Best Practices for Working with AI Virtual Assistants

Explore top LinkedIn content from expert professionals.

Summary

Best practices for working with AI virtual assistants involve creating clear instructions and integrating these tools thoughtfully into your workflow so they can support your tasks reliably. An AI virtual assistant is a computer program that uses artificial intelligence to carry out tasks, answer questions, or automate processes for users, often through chat or voice interfaces.

  • Clarify your instructions: Take time to write precise, detailed prompts so your AI assistant understands exactly what you need, which leads to better results and fewer misunderstandings.
  • Integrate with your workflow: Embed AI assistants into your regular tools and processes so their help feels natural and doesn’t interrupt your routine.
  • Keep humans involved: Always review and refine the AI’s suggestions, making sure outputs are accurate and trustworthy before acting on them.
Summarized by AI based on LinkedIn member posts
  • View profile for Edward Frank Morris
    Edward Frank Morris Edward Frank Morris is an Influencer

    Forbes. LinkedIn Top Voice for AI.

    35,671 followers

    A few months ago, a colleague screamed at Microsoft Copilot like he was auditioning for Bring Me The Horizon. He typed, “Make this into a presentation.” Copilot spat out something. He yelled, “NO, I SAID PROFESSIONAL!” It revised it. Still wrong. “WHY ARE YOU SO STUPID?” And that, dear reader, is when it hit me. It’s not the AI. It’s you. Or rather, your prompts. So, if you've ever felt like ChatGPT, Copilot, Gemini, or any of those AI Agents are more "artificial" than "intelligent"? Then rethink how you’re talking to them. Here are 10 prompt engineering fundamentals that’ll stop you from sounding like you're yelling into the void. 1. Lead with Intent. Start with a clear command: “You are an expert…,” “Generate a monthly report…,” “Translate this to French…" This orients the model instantly. 2. Scope & Constraints First. Define boundaries up front. Length limits, style guides, data sources, even forbidden terms. 3. Format Your Output. Specify JSON schema, markdown headers, or table columns. Models love explicit structure over free form prose. 4. Provide Minimal, High Quality Examples. Two or three exemplar Q→A pairs beat a paragraph of explanation every time. 5. Isolate Subtasks. Break complex workflows into discrete prompts (chain of thought). One prompt per action: analyze, summarize, critique, then assemble. 6. Anchor with Delimiters. Use triple backticks or XML tags to fence inputs. Cuts hallucinations in half. 7. Inject Domain Signals. Name specific frameworks (“Use SWOT analysis,” “Apply the Eisenhower Matrix,” “Leverage Porter’s Five Forces”) to nudge depth. 8. Iterate Rapidly. Version your prompts like code. A/B test variations, track which phrasing yields the cleanest output. 9. Tune the “Why.” Always ask for reasoning steps. Always. 10. Template & Automate. Build parameterized prompt templates in your repo. Still with me? Good. Bonus tips. 1. Token Economy Awareness. Place critical context in the first 200 tokens. Anything beyond 1,500 risks context drift. 2. Temperature vs. Prompt Depth. Higher temperature amplifies creativity. Only if your prompt is concise. Otherwise you get noise. 3. Use “Chain of Questions.” Instead of one long prompt, fire sequential, linked questions. You’ll maintain context and sharpen focus. 4. Mirror the LLM’s Own Language. Scan model outputs for phrasing patterns and reflect those idioms back in your prompts. 5. Treat Prompts as Living Docs. Embed metrics in comments: note output quality, error rates, hallucination frequency. Keep iterating until ROI justifies the effort. And finally, the bit no one wants to hear. You get better at using AI by using AI. Practice like you’re training a dragon. Eventually, it listens. And when it does, it’s magic. You now know more about prompt engineering than 98% of LinkedIn. Which means you should probably repost this. Just saying. ♻️

  • View profile for Bhrugu Pange
    3,424 followers

    I’ve had the chance to work across several #EnterpriseAI initiatives esp. those with human computer interfaces. Common failures can be attributed broadly to bad design/experience, disjointed workflows, not getting to quality answers quickly, and slow response time. All exacerbated by high compute costs because of an under-engineered backend. Here are 10 principles that I’ve come to appreciate in designing #AI applications. What are your core principles? 1. DON’T UNDERESTIMATE THE VALUE OF GOOD #UX AND INTUITIVE WORKFLOWS Design AI to fit how people already work. Don’t make users learn new patterns — embed AI in current business processes and gradually evolve the patterns as the workforce matures. This also builds institutional trust and lowers resistance to adoption. 2. START WITH EMBEDDING AI FEATURES IN EXISTING SYSTEMS/TOOLS Integrate directly into existing operational systems (CRM, EMR, ERP, etc.) and applications. This minimizes friction, speeds up time-to-value, and reduces training overhead. Avoid standalone apps that add context-switching or friction. Using AI should feel seamless and habit-forming. For example, surface AI-suggested next steps directly in Salesforce or Epic. Where possible push AI results into existing collaboration tools like Teams. 3. CONVERGE TO ACCEPTABLE RESPONSES FAST Most users have gotten used to publicly available AI like #ChatGPT where they can get to an acceptable answer quickly. Enterprise users expect parity or better — anything slower feels broken. Obsess over model quality, fine-tune system prompts for the specific use case, function, and organization. 4. THINK ENTIRE WORK INSTEAD OF USE CASES Don’t solve just a task - solve the entire function. For example, instead of resume screening, redesign the full talent acquisition journey with AI. 5. ENRICH CONTEXT AND DATA Use external signals in addition to enterprise data to create better context for the response. For example: append LinkedIn information for a candidate when presenting insights to the recruiter. 6. CREATE SECURITY CONFIDENCE Design for enterprise-grade data governance and security from the start. This means avoiding rogue AI applications and collaborating with IT. For example, offer centrally governed access to #LLMs through approved enterprise tools instead of letting teams go rogue with public endpoints. 7. IGNORE COSTS AT YOUR OWN PERIL Design for compute costs esp. if app has to scale. Start small but defend for future-cost. 8. INCLUDE EVALS Define what “good” looks like and run evals continuously so you can compare against different models and course-correct quickly. 9. DEFINE AND TRACK SUCCESS METRICS RIGOROUSLY Set and measure quantifiable indicators: hours saved, people not hired, process cycles reduced, adoption levels. 10. MARKET INTERNALLY Keep promoting the success and adoption of the application internally. Sometimes driving enterprise adoption requires FOMO. #DigitalTransformation #GenerativeAI #AIatScale #AIUX

  • View profile for Amit Rawal

    Google AI Transformation Leader | Former Apple AI/ML Product | Stanford | AI Educator & Keynote Speaker

    58,142 followers

    Your AI agent sounds dumb because you haven't told it how to think. Most people build agents and hope for the best. Then wonder why it hallucinates, forgets context, or gives irrelevant answers. The truth? A poorly prompted agent will always underperform. A well-prompted agent becomes your best teammate. Here's exactly how to prompt an AI agent so it actually works: 📌 The 25 Agent Prompting Rules: 1. Define ONE job clearly – Not 20 tasks. One clear purpose. 2. List the exact tools it can use – Guardrails prevent chaos. 3. Teach it when to use each tool – Specific conditions, not guessing. 4. Set hard boundaries – What it MUST refuse, no exceptions. 5. Give personality only if necessary – Focus on function first. 6. Make it ask clarifying questions – Before it acts, it asks. 7. Force it to show reasoning – Explain the "why" before the "what." 8. Define escalation rules – When to ask a human for help. 9. Use edge case examples – Teach with real scenarios, not theory. 10. Specify exact output format – JSON, bullet points, tables—be precise. 11. Add a verification step – Check facts before responding. 12. Build in a hallucination check – "Did I make something up?" 13. Teach confirming questions – "Did I understand correctly?" 14. Set max response length – Forces clarity and focus. 15. Tell it to admit uncertainty – "I don't know" beats wrong answers. 16. Inject domain knowledge – Paste in your context/guidelines. 17. Add user handling rules – How to deal with frustrated users. 18. Define graceful "I don't know" – Better than guessing. 19. Specify tone & voice – Professional, friendly, casual—pick one. 20. Ask it to suggest next steps – Don't just solve, guide. 21. For customer service: Add brand voice – Keep consistency. 22. For sales agents: Define "qualified" – Who's a real lead? 23. For research: Require source verification – No made-up citations. 24. For code: Enforce quality standards – Clean, documented, tested. 25. Test worst-case scenarios first – Break it before users do. 📌 Why This Matters: A well-prompted agent handles 70-80% of work automatically. A badly prompted one wastes everyone's time. The difference? 30 minutes of thought upfront on your prompting strategy. Which of these 25 rules do you think your current AI agents are missing? Comment below, I'll share specific prompt templates for your use case. And if you're building agents, save this. You'll reference it constantly. ___________________________________________ 👋 I’m Amit Rawal, an AI practitioner and educator. Outside of work, I’m building SuperchargeLife.ai , a global movement to make AI education accessible and human-centered. ♻️ Repost if you believe AI isn’t about replacing us… It’s about retraining us to think better. Opinions expressed are my own in a personal capacity and do not represent the views, policies, or positions of my employer (currently Google LLC) or its subsidiaries or affiliates.

  • View profile for Kyle Poyar

    Growth Unhinged | Real-life growth insights, playbooks, and case studies

    107,220 followers

    AI products like Cursor, Bolt and Replit are shattering growth records not because they're "AI agents". Or because they've got impossibly small teams (although that's cool to see 👀). It's because they've mastered the user experience around AI, somehow balancing pro-like capabilities with B2C-like UI. This is product-led growth on steroids. Yaakov Carno tried the most viral AI products he could get his hands on. Here are the surprising patterns he found: (Don't miss the full breakdown in today's bonus Growth Unhinged: https://lnkd.in/ehk3rUTa) 1. Their AI doesn't feel like a black box. Pro-tips from the best: - Show step-by-step visibility into AI processes - Let users ask, “Why did AI do that?” - Use visual explanations to build trust. 2. Users don’t need better AI—they need better ways to talk to it. Pro-tips from the best: - Offer pre-built prompt templates to guide users. - Provide multiple interaction modes (guided, manual, hybrid). - Let AI suggest better inputs ("enhance prompt") before executing an action. 3. The AI works with you, not just for you. Pro-tips from the best: - Design AI tools to be interactive, not just output-driven. - Provide different modes for different types of collaboration. - Let users refine and iterate on AI results easily. 4. Let users see (& edit) the outcome before it's irreversible. Pro-tips from the best: - Allow users to test AI features before full commitment (many let you use it without even creating an account). - Provide preview or undo options before executing AI changes. - Offer exploratory onboarding experiences to build trust. 5. The AI weaves into your workflow, it doesn't interrupt it. Pro-tips from the best: - Provide simple accept/reject mechanisms for AI suggestions. - Design seamless transitions between AI interactions. - Prioritize the user’s context to avoid workflow disruptions. -- The TL;DR: Having "AI" isn’t the differentiator anymore—great UX is. Pardon the Sunday interruption & hope you enjoyed this post as much as I did 🙏 #ai #genai #ux #plg

  • View profile for Helen Mills

    Global Vice President, Chief Corporate Affairs and Sustainability Officer, Mars Petcare

    4,390 followers

    Are you finding exploring generative AI tools daunting? Sharing your successes – and stumbles – with others can help it feel less so. That’s why we gathered our global Mars Corporate Affairs function last week for the latest in our practical GenAI series, this time on a very important topic - improving the quality of Gen AI prompts. From adapting communication across channels or audience styles to team haikus, it was great to hear how our teams are already experimenting with these emerging tools creatively and, importantly, safely – and we rolled up our sleeves and tried different prompting techniques together on the call. I thought I'd share a few of our key takeaways as they may be useful for others:  * Prompt quality drives AI value: Crafting clear, specific prompts significantly improves AI output quality, reduces rewrites, and increases trust in results. Investing time in prompt creation upfront is a smart way to maximize efficiency.  * There are different advanced prompting techniques: We learned about shot-based prompting (zero, one, few-shot), chain-of-thought prompting (breaking down complex tasks), and prompt-priming (setting context and tone at the start) to enhance AI performance. * Consider a ‘prompt library’: There’s an art and science to developing great prompts. Consider banking reusable prompts across teams to save time and share best practices.  * Troubleshooting: Expect issues like hallucinated data, token limits and slow responses. Consider providing ‘escape routes’ in prompts (e.g. instructing the AI to say "I don't know" if unsure).  * Last but not least, keep the human in the loop: Today AI should augment, not replace, human judgment to review, refine, and validate AI outputs for accuracy, bias, and ethical considerations. Prompting by nature is an iterative process - it's normal not to get the perfect output on the first try; iterating and refining prompts through conversation with the AI leads to better results. But our best tip by far – just get stuck in.  Experimenting and sharing your learnings (in accordance with your company's safe Gen AI guidelines) is the best way to build these new muscles more quickly. Got a favourite prompt? Or other great tips in building capabilities in this area, I’d love to hear it. Big thanks to Camilla Vasquez, Katherine Horrocks, Ishtar Schneider and many others for being a driving force in helping to build our capabilities in this important area. #GenAI #CorporateAffairs

  • View profile for Amit Kumar Soni

    Founder & CEO Mindacks.ai | AI Governance & Risk · Human-Centred AI Institute · Agentic AI Deployment | PhD Researcher (Responsible AI & Neuroscience) | Ex-PepsiCo Global Head | 60K+ Trained | ICF PCC

    31,946 followers

    Everyone is asking: “Which AI tool should I use?” Wrong question. The real question is: 𝗪𝗵𝗮𝘁 𝗮𝗿𝗲 𝘆𝗼𝘂 𝘁𝗿𝘆𝗶𝗻𝗴 𝘁𝗼 𝗮𝗰𝗵𝗶𝗲𝘃𝗲? Because tools don’t solve problems. 𝗖𝗹𝗮𝗿𝗶𝘁𝘆 𝗱𝗼𝗲𝘀. Here’s the simple framework most people miss: 1. Start with your goal Ask yourself:  • Do I need to create, learn, analyze, or decide?  • Do I want speed or depth?  • Is this a one-time task or a repeatable workflow? If this is unclear, every AI tool will feel average. 2. Ask better questions Most people prompt like this: “Help me with this” Top performers prompt like this:  • Break this into steps  • Give me 3 options with trade-offs  • What am I missing?  • Challenge my thinking Your output is only as good as your input. 3. Then choose the tool  • ChatGPT → thinking, structuring, problem-solving  • Gemini → Google ecosystem workflows  • Claude → writing, nuance, long-form content  • Perplexity → research and fact-checking  • Copilot → Excel, PowerPoint, enterprise tasks  • Grok → real-time insights No tool is “best” Each tool is context-specific. 4. Build a simple system Stop using AI randomly. Use this structure: Context → Goal → Constraints → Output Example: “I’m preparing a 10-slide strategy deck for senior leaders. Give me slide titles and key points. Keep it concise.” This is where results change. 5. Combine tools like an operator  • Think with ChatGPT  • Refine with Claude  • Verify with Perplexity  • Execute with Copilot You can find Free 18 tools you can start using AI today from this post https://lnkd.in/gX47sUT4 That’s the difference between using AI and leveraging it. The shift is simple: Amateurs ask: “Which tool is best?” Professionals ask: “How do I think better with these tools?” Follow for more practical AI frameworks that actually work.

  • View profile for Matt Palmer

    Developer Experience at Conductor

    18,636 followers

    Whether you're using Replit Agent, Assistant, or other AI tools, clear communication is key. Effective prompting isn't magic; it's about structure, clarity, and iteration. Here are 10 principles to guide your AI interactions: 🔹 Checkpoint: Build iteratively. Break down large tasks into smaller, testable steps and save progress often. 🔹 Debug: Provide detailed context for errors – error messages, code snippets, and what you've tried. 🔹 Discover: Ask the AI for suggestions on tools, libraries, or approaches. Leverage its knowledge base. 🔹 Experiment: Treat prompting as iterative. Refine your requests based on the AI's responses. 🔹 Instruct: State clear, positive goals. Tell the AI what to do, not just what to avoid. 🔹 Select: Provide focused context. Use file mentions or specific snippets; avoid overwhelming the AI. 🔹 Show: Reduce ambiguity with concrete examples – code samples, desired outputs, data formats, or mockups. 🔹 Simplify: Use clear, direct language. Break down complexity and avoid jargon. 🔹 Specify: Define exact requirements – expected outputs, constraints, data formats, edge cases. 🔹 Test: Plan your structure and features before prompting. Outline requirements like a PM/engineer. By applying these principles, you can significantly improve your collaboration with AI, leading to faster development cycles and better outcomes.

  • View profile for Janet Perez (PHR, Prosci, DiSC)

    Head of Learning & Development | AI for Workforce Transformation | Shaping the Future of Work & Work Optimization

    8,742 followers

    Somebody has to say it: some AI tools are causing more harm than good. Not because the technology is bad. Not because people are resisting change. But because we keep rolling out tools without guidance, training, or context and calling it “innovation.” When employees are expected to figure it out on their own, confusion replaces confidence. Work slows down. Trust erodes. AI at work doesn’t fail loudly. It quietly creates friction when enablement is missing. If we want better outcomes, we have to design for adoption, not just deployment. If you’re rolling out AI at work and want it to actually help, here’s a simple place to start: 1. Start with the “why,” not the tool ✅ Be clear about the problem AI is meant to solve. Productivity, quality, speed, decision-making. If people don’t understand the purpose, they won’t trust the tool. 2. Define when and when not to use it ✅ Ambiguity creates hesitation. Give real examples of appropriate use cases and clear boundaries so employees aren’t guessing. 3. Train for workflows, not features ✅ Skip the generic demos. Show how the tool fits into existing day-to-day work, step by step. 4. Equip managers first ✅ If managers can’t explain or model usage, adoption stalls. Enable leaders before expecting teams to follow. 5. Build feedback loops early ✅ Create space for questions, friction, and adjustments. Early feedback prevents quiet frustration from turning into resistance. 6. Treat adoption as ongoing, not a launch event ✅ AI enablement isn’t a one-time rollout. It’s reinforcement, iteration, and support over time. AI works best when people feel prepared, not pressured. ——— ✦ ——— 🌱 More on AI + Workforce Development → Janet Perez

  • View profile for Stephen Smith

    I Help Law Firms Turn AI Into Billable Outcomes | 3,000+ Attorneys Trained | Founder & CEO, Intelligence by Intent | CLE Keynote Speaker

    2,881 followers

    I'm delivering an in-person AI training session tomorrow for a 250-person company (I love doing events like these). One of the things I always share with my clients: "5 things to remember about using AI" (especially if you are a law firm, PE firm, valuation firm, account firm, etc). Figured I'd share them here too. 1. Always Verify, Never Trust Blindly: LLMs can hallucinate, make mistakes, and present wrong information confidently. Think "trust but verify" for everything important. AI is a powerful first-draft tool, not a fact-checker. 2. Context is Currency: The more specific context you provide (background, audience, goals, examples, constraints), the better your results. Vague prompts get vague outputs. Detailed prompts get valuable outputs. The quality of what you get out is directly proportional to what you put in. 3. Iterate: Don't Expect Perfection on Round One: AI is a collaborative partner, not a magic button. Your first result is a starting point. Refine it, redirect it, build on it. The best outputs come from conversation, not single prompts. Think of it as working with the AI, not just asking it to do things for you. 4. Humans Make the Final Call: AI should amplify human judgment, not replace it. You need human oversight for: quality, accuracy, appropriateness, strategic decisions, and anything customer-facing. AI does the heavy lifting; humans do the final polish and approval. 5. The Right Tool for the Right Job: Not every task needs AI, and not every AI tool fits every task. Use reasoning models for complex problems, fast models for simple tasks. Sometimes a spreadsheet formula or a quick phone call is still the better solution. Don't force AI where it doesn't add value.

  • I've been going deep with AI as we prepare to release new Bonterms Standards. The warning 'this model can make mistakes' is hard-coded in the interface of LLMs for a reason. Here are some tips to help mitigate the risk. 𝗧𝗶𝗽𝘀 𝗳𝗼𝗿 𝗲𝗳𝗳𝗲𝗰𝘁𝗶𝘃𝗲 𝘂𝘀𝗲 𝗼𝗳 𝗔𝗜 𝗶𝗻 𝗰𝗼𝗻𝘁𝗿𝗮𝗰𝘁 𝗱𝗿𝗮𝗳𝘁𝗶𝗻𝗴: 𝟭. 𝗥𝗼𝘁𝗮𝘁𝗲 𝗲𝗮𝗰𝗵 𝘁𝗮𝘀𝗸 𝘁𝗵𝗿𝗼𝘂𝗴𝗵 𝗺𝘂𝗹𝘁𝗶𝗽𝗹𝗲 𝗺𝗼𝗱𝗲𝗹𝘀. Paste the same prompts into Anthropic, ChatGPT and Gemini as you go. Models change constantly, even from one session to the next, so three perspectives help with quality control and completeness. 𝟮. 𝗪𝗼𝗿𝗸 𝘀𝗲𝗰𝘁𝗶𝗼𝗻 𝗯𝘆 𝘀𝗲𝗰𝘁𝗶𝗼𝗻. AI struggles with how provisions interconnect. A definition of "Confidential Information" may undercut a carefully constructed damages cap for data breach, but AI won't notice. Section-by-section review keeps both you and the AI focused on getting each piece right before moving to the next. 𝟯. 𝗦𝗲𝘁 𝘁𝗵𝗲 𝗽𝗮𝗰𝗲. The chat format and AI's tendency to assume you're in a rush work against the flow state you need when working through long, complex documents. Set your own pace. 𝟰. 𝗞𝗲𝗲𝗽 𝘁𝗵𝗲 𝗰𝗼𝗻𝘁𝗲𝘅𝘁 𝘄𝗶𝗻𝗱𝗼𝘄 𝘀𝗺𝗮𝗹𝗹. Upload the agreement you're working on, but trim unnecessary exhibits or comparison examples. AI processes text statistically and can get overwhelmed by the noise. Rotate reference materials in and out rather than dumping everything at once. 𝟱. 𝗕𝗲 𝗮𝗰𝘁𝗶𝘃𝗲 𝗮𝗻𝗱 𝗶𝘁𝗲𝗿𝗮𝘁𝗶𝘃𝗲 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗽𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴. Iterate as you go rather than relying on a single master prompt. Give context to fix misunderstandings and stay focused on what is coming up in the analysis, instead of front-loading long rule lists. 𝟲. 𝗕𝗲 𝗰𝗿𝗲𝗮𝘁𝗶𝘃𝗲 𝗶𝗻 𝘆𝗼𝘂𝗿 𝗽𝗿𝗼𝗺𝗽𝘁𝗶𝗻𝗴. Assign a "red-team" role to act as opposing counsel, probing for ambiguities and vulnerabilities. Run a trace through critical mechanisms (warranty → indemnity → limitation of liability). Ask for targeted audits of definitions, cross-references, and surviving sections. This is where AI really shines: you can run any check or test you think of and get hits you might not otherwise see. 𝟳. 𝗖𝗼𝗻𝘁𝗶𝗻𝘂𝗮𝗹𝗹𝘆 𝗿𝗲𝗺𝗶𝗻𝗱 𝘆𝗼𝘂𝗿𝘀𝗲𝗹𝗳 𝘁𝗵𝗮𝘁 𝗔𝗜 𝗰𝗮𝗻𝗻𝗼𝘁 𝘁𝗵𝗶𝗻𝗸 𝗼𝗿 𝗿𝗲𝗮𝘀𝗼𝗻. We don't yet have good language for what AI is doing, but it's not thinking. You're flinging ideas against a math table trained to sound helpful and authoritative. But obsequiousness is the last trait you'd want in a thinking partner. Don't let it decide issues for you. 𝟴. 𝗨𝘀𝗲 𝗣𝗗𝗙𝘀. Automatic numbering in Word documents baffles AI. Just upload the PDF. 𝟵. 𝗦𝗵𝗼𝘄 𝗶𝘁 𝘁𝗼 𝗮𝗻𝗼𝘁𝗵𝗲𝗿 𝗵𝘂𝗺𝗮𝗻. At my old firm, everything needed two sets of eyes before it went out: guidance emails, checklists, agreement drafts. No exceptions. Now I get to show every draft to 120 lawyers on the Bonterms Committee. AI is not a colleague or a replacement for one. Always get another human to review before you hit send.

Explore categories