Ian Cairns’ Post

🎙️ The next episode of Deployed is out! This one is with Lijuan Qin, Head of Product for Zoom AI. It's a great listen, especially for PMs working on AI products. Link in comments.

Lijuan has a PhD in AI and spent 20 years at Microsoft working on NLP and video understanding before joining Zoom. She's shipping agents into a 300M+ user base, with both consumer and enterprise considerations.

What I liked about this conversation: Lijuan talks about how her team has moved past the "did the AI give the right answer?" framing for evaluation. As they've shifted from Q&A chatbots to agents that complete workflows, how they measure quality has changed too.

A few things that stood out:

😬 High engagement can be a bad sign. If users keep going back and forth with your agent, the product might be failing them. Her team measures weekly retention and task completion instead of interaction volume.

✅ Zoom's "conversation to completion" bet. Action items from meetings are broken today. Most AI note-takers generate a to-do list, and then nothing happens. Zoom wants to build agents that actually do the follow-through.

🎯 How she scopes failure: define assumptions and success criteria upfront, box the blast radius, and then teams can experiment without approval queues.

That first point is the one I've thought about the most. Most teams celebrate engagement numbers. If you're building an agent, it's worth asking yourself what you're actually optimizing for.

#AIProducts #ProductManagement #ZoomAI #Agents

