I’m Anurag. By day, I build AI products at scale across complex, regulated domains — currently in healthcare, previously in commerce, supply chain, and logistics. By night, I build things — most recently a discovery framework for agentic systems that don’t behave the way the demos suggest. I write here at the intersection of product, AI, and the failure modes nobody’s writing about yet.

Theatricality is about a specific problem: in agentic AI systems, the relationship between what an agent says it’s doing and what it’s actually doing is not guaranteed. Agents perform reasoning the way actors perform a role — the output can be coherent and convincing while the underlying process is something else entirely. The gap between stated behavior and actual behavior is real, it compounds across multi-step workflows, and almost nobody is writing about it clearly. That’s the gap this newsletter lives in.

Posts come out roughly weekly, usually around 1,500 words. The focus is agentic AI failure modes, product and engineering judgment, and the cases where the conventional wisdom turns out to be wrong. No news roundups, no hype, no spam.

Subscribe to Theatricality

Notes on what AI systems perform versus what they do