My Experiments with AI

Since January 2026, I've been spending a lot of time 'vibe coding', especially with Claude. The latter half of 2024 and all of 2025 were mostly NotebookLM and Gemini. These are some of my recent experiments — projects that started with a problem I wanted to solve, turned into conversations about how to build, and evolved into tools I actually use. Each one taught me something different about working with AI on real problems.

Personal · April 2026
Vibe Coding

A picture from 2007, a Mac in the Theoretical CS lab at IIT Madras, and why vibe coding with Claude feels exactly like those 5 a.m. all-nighters from undergrad. This is the opening note for everything that follows.

Read full story →
Engineering · LLMs · April 2026
The Assembly Problem

LLMs are good at generating analysis. What they’re bad at is assembling separately generated files into one coherent report. The fix was replacing the merge agent with 300 lines of deterministic Python — and using Claude’s own failure taxonomy as an engineering input.

Read full story →
Infrastructure · Python
Building a Scraper That Works on Any Investor Relations Website

Can a single scraper work on any company's IR website without site-specific code? Eight platform types fingerprinted, a Linnaeus-inspired taxonomy, Taleb's barbell principle applied to bot evasion, and lessons learned about why discovery is harder than downloading.

Read full story →
Audio Processing · Indian Classical Music
Transcribing Eight Years of Music Lessons

Using Claude to transcribe Indian classical music lessons from the guru-shishya tradition. How building a memory aid changed how I listen, why context and a "student state" document made the transcripts useful, and what it means to systematise knowledge that has resisted it for two millennia.

Read full story →
Investment Research · Automation
Building a Sell-Side Research Pipeline

A five-stage pipeline that turns seventeen browser tabs into four. Intake, classification, extraction, synthesis, and flagging — each doing one thing well. The Unix pipe philosophy applied to investment research, and why defining the process was more valuable than automating it.

Read full story →
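The Unix-pipe framing above can be sketched as small single-purpose functions composed left to right. This is a hypothetical illustration, not the pipeline from the post: the five stage names come from the blurb, but the data shapes, keywords, and logic are invented for the sake of a runnable example.

```python
# Hypothetical sketch of a five-stage pipeline in the Unix-pipe style:
# each stage does one thing and hands its output to the next.
# Stage names follow the post; everything else is illustrative.
from functools import reduce

def intake(raw):
    # Normalise raw source documents into a uniform shape.
    return [{"text": t.strip()} for t in raw]

def classify(docs):
    # Tag each document with a coarse category.
    for d in docs:
        d["kind"] = "earnings" if "earnings" in d["text"].lower() else "other"
    return docs

def extract(docs):
    # Pull out the fields downstream stages care about.
    for d in docs:
        d["length"] = len(d["text"])
    return docs

def synthesize(docs):
    # Collapse many documents into one summary structure.
    return {"count": len(docs),
            "earnings": sum(d["kind"] == "earnings" for d in docs)}

def flag(summary):
    # Raise anything that needs human attention.
    summary["needs_review"] = summary["earnings"] > 0
    return summary

def pipeline(raw, stages=(intake, classify, extract, synthesize, flag)):
    # Compose the stages in order, like shell pipes.
    return reduce(lambda data, stage: stage(data), stages, raw)

report = pipeline(["Q3 earnings call transcript", "Product launch note"])
```

The appeal of this shape is that each stage can be tested, swapped, or rerun on its own, which is what makes defining the process valuable independent of any automation.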
Analysis · LLMs · May 2023
How Bad Are the Hallucinations?

Deep dive into ChatGPT's hallucination problem. Testing accuracy on factual questions, understanding when and why models confidently state false information, and what this means for building with language models in production.

Published on Substack
Predictions · AI Trends · Dec 2023
Two Words for 2024

My predictions for how AI will evolve in 2024. Two key words that capture the direction of the technology, the market, and the kinds of problems people will solve with language models.

Published on Substack
Image Generation · DALL-E
Dilly-DALL-E-ing

Exploring DALL-E's capabilities and quirks. Testing the boundaries of image generation, understanding what works, what breaks, and the gap between what these models can do and what we actually want them to do.

Published on Medium
Coming Soon
Your Next Experiment

More experiments coming as I continue building with AI. Each project teaches something new about problem-solving, AI capabilities, and the gap between what's theoretically possible and what actually works in practice.

Want to see more writing? Subscribe to my Substack for regular essays.
