From Chat to Alpha: Designing Useful Agents
Arjun Mahesh is Head of Design at Hebbia. Before Hebbia, he spent time in the studio of Frank Gehry, then BCG, Kickstarter, Stripe, and Verse, bouncing between consumer and B2B work and building the muscles to ship in high-stakes domains. He lives at the intersection of architecture, computer science, and product design, and he brings that mix to a single obsession at Hebbia: make intelligence usable in finance and law, where accuracy, auditability, and speed actually decide if software survives.
At a glance
Arjun says chat is the right on-ramp, not the destination.
Arjun explains how he introduces agentic software to conservative teams.
Arjun shows how Hebbia codifies a firm’s “alpha” into one-click agents tied to real deliverables.
Arjun walks through how the Hebbia experience is evolving from complexity to approachable, high-power flows.
Arjun outlines the role of Grid as the power surface when chat is not enough.
Arjun talks about tooling, input quality, and short iteration loops.
Arjun shares how he measures success and what he is hiring for next.
Topics
Chat is the doorway
Arjun thinks chat meets people where they are and reduces time-to-first-value. He treats it as the entry point that proves utility fast, then graduates users to higher-power surfaces only when the job requires more control or throughput.
Introducing agents to conservative contexts
Arjun says many finance users are new to agentic UX, so he frames the first-run experience around known jobs to be done. He leans on safe defaults, concrete outcomes like diligence prep or meeting prep, and language that maps cleanly to existing analyst workflows.
Codifying “alpha” into one-click workflows
Arjun’s view is that Hebbia’s edge is codification. Proprietary research steps become templates, and proven templates graduate to one-click agents that output the artifacts teams already use in review and reporting.
Grid as the power surface
Arjun describes Grid as the surface for complex analysis once chat hits its ceiling. It spans multi-document retrieval, large prompt sets, and parameterization, making heavy workflows repeatable and auditable without forcing low-level prompt wrangling.
How the Hebbia experience is evolving
Arjun walks through the shift from an early, visually dense interface that validated the concept to a design that pins common jobs, makes inputs and outputs explicit, and supports one-click flows for meeting notes and Grid analyses, with deeper controls available on demand.
Tooling and loops that actually move work
Arjun says the team uses Hebbia itself alongside V0, Midjourney or Visual Electric for visuals, and assistants like ChatGPT, Perplexity, and Elicit. His emphasis is that outputs track inputs, references and instructions matter, and short iteration loops beat one-shot prompts.
What “success” means in this category
Arjun is explicit about the signals: is it selling, are the target roles using it in depth, and is it differentiated from consumer tools and direct competitors? Novelty is fine; durable adoption that replaces legacy workflows is the bar.
What’s next and who he is hiring
Arjun thinks the next fronts are deeper codification, agent autonomy with appropriate oversight, and mobile contexts with tighter attention windows and latency budgets. On hiring, he looks for blended design-product-engineering profiles, evidence of agentic interaction thinking, understanding of retrieval and guardrails, and working prototypes over perfect decks.
Thanks for checking out our first episode. Stay in the loop on new episodes and upcoming events by subscribing.