Most people evaluate LLMs by benchmarks alone. But in production, the real question is: how well do they perform under load? When you’re running inference at scale, these are the three performance metrics that matter most:

1️⃣ Latency
How fast does the model respond after receiving a prompt? There are two kinds to care about:
→ First-token latency: time until the response starts streaming
→ End-to-end latency: time to generate the full response
Latency directly impacts UX for chat, speed for agentic workflows, and runtime cost for batch jobs. Even small delays add up fast at scale.

2️⃣ Context Window
How much information can the model attend to, both from the prompt and prior turns? This affects long-form summarization, RAG, and agent memory. Models range widely:
→ GPT-3.5 / LLaMA 2: 4k–8k tokens
→ GPT-4 / Claude 2: 32k–200k tokens
→ GPT-OSS-120B: 131k tokens
A larger context enables richer workflows but comes with tradeoffs: slower inference and higher compute cost. Techniques like attention sinks or sliding-window attention can help you get more out of a fixed context window.

3️⃣ Throughput
How many tokens or requests can the model handle per second? This is key when you’re serving thousands of requests or processing large document batches. Higher throughput means faster completion and lower cost.

How to optimize for your use case:
→ Real-time chat or tool use → prioritize low latency
→ Long documents or RAG → prioritize a large context window
→ Agentic workflows → balance latency and context
→ Async or high-volume processing → prioritize high throughput

My 2 cents 🤌
→ Choose in-region, lightweight models for lower latency
→ Use 32k+ context models only when necessary
→ For agents, mix long-context models with fast first-token latency
→ Tune batch size and decoding strategy to maximize throughput

Don’t just pick a model based on benchmarks. Pick the right tradeoffs for your workload.
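The two latency flavors above are easy to measure yourself around any streaming client. A minimal sketch, using a simulated stream (`fake_stream` is a stand-in, not a real provider API; swap in your own streaming call):

```python
import time
from typing import Iterator

def fake_stream(prompt: str) -> Iterator[str]:
    """Stand-in for a streaming LLM client; replace with your provider's API."""
    time.sleep(0.05)  # simulated prefill delay before the first token
    for token in ["Hello", ",", " world", "!"]:
        time.sleep(0.01)  # simulated per-token decode time
        yield token

def measure_latency(prompt: str, stream=fake_stream):
    start = time.perf_counter()
    first_token_s = None
    tokens = []
    for token in stream(prompt):
        if first_token_s is None:
            # First-token latency: delay before the response starts streaming.
            first_token_s = time.perf_counter() - start
        tokens.append(token)
    # End-to-end latency: time to generate the full response.
    end_to_end_s = time.perf_counter() - start
    return first_token_s, end_to_end_s, "".join(tokens)

ttft, e2e, text = measure_latency("Say hello")
print(f"TTFT={ttft:.3f}s  E2E={e2e:.3f}s  output={text!r}")
```

Run the same harness against each candidate model to compare first-token vs. full-response behavior under identical prompts.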
How to Test OSS/BSS LLM Performance
Summary
Testing OSS/BSS LLM performance means evaluating how well large language models function within telecom operations and business support systems, focusing on speed, accuracy, and reliability. Instead of relying on simple benchmarks, real-world testing measures how these models handle tasks like chat, document summarization, and workflow automation at scale.
- Track latency metrics: Measure how quickly the model starts responding and completes tasks, as both impact user satisfaction and system costs.
- Measure throughput and goodput: Count how many requests or tokens the model processes per second, and check what percentage meet service standards for real-world reliability.
- Evaluate context and task success: Test how much information the model can remember and whether it completes tasks accurately and efficiently, especially for complex workflows.
How do you measure LLM inference performance in the real world? Most people stop at “tokens per second.” In production, that’s not enough. Here are the metrics that truly matter:
• Time to First Token (TTFT): how long before the first response shows up. Under 200 ms feels seamless; anything above 2 s loses users.
• Time Per Output Token (TPOT): defines the smoothness of streaming. ~4 tokens/sec matches human reading speed; below 2 feels slow, above 8 adds little value.
• Token Generation Time: the time from first to final token. Crucial for long-form responses and research-heavy use cases.
• Total Latency (E2EL): from sending the request to receiving the last token. Formula: E2EL = TTFT + Token Generation Time.
• P50 vs P99 Latency: median vs. worst-case. Leaders should care about the tail, because that’s what frustrates customers.
• Requests Per Second (RPS) vs Tokens Per Second (TPS): RPS = conversations handled; TPS = tokens generated. Context matters.
• Goodput (not just throughput): the percentage of requests that actually meet SLAs. 1000 TPS with 20% timeouts? Real goodput is only 800 TPS.
• Throughput vs Latency Trade-off: bigger batches = higher throughput but slower per-user responses; smaller batches = faster responses but lower overall throughput. You can’t maximize both.
Leaders: think in terms of user experience (TTFT, P99 latency). Practitioners: think in terms of system efficiency (RPS, goodput). The real skill? Picking the right metric for the right use case, and defending that tradeoff.
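The tail-latency and goodput arithmetic above can be sketched from a request log. The numbers below are hypothetical illustration data; the nearest-rank percentile is one common convention among several:

```python
# Hypothetical per-request log: (end-to-end latency in seconds, met_sla flag).
requests = [
    (0.21, True), (0.35, True), (0.42, True), (0.55, True), (0.61, True),
    (0.72, True), (0.88, True), (1.10, True), (2.40, False), (3.90, False),
]

latencies = sorted(lat for lat, _ in requests)

def percentile(sorted_vals, p):
    """Nearest-rank percentile over an already-sorted list."""
    idx = min(len(sorted_vals) - 1, round(p / 100 * (len(sorted_vals) - 1)))
    return sorted_vals[idx]

p50 = percentile(latencies, 50)  # the median user's experience
p99 = percentile(latencies, 99)  # the tail that frustrates customers

# Goodput: raw throughput scaled by the fraction of requests meeting the SLA.
raw_tps = 1000  # measured tokens/sec, assumed for illustration
sla_rate = sum(ok for _, ok in requests) / len(requests)
goodput_tps = raw_tps * sla_rate

print(f"P50={p50:.2f}s  P99={p99:.2f}s  goodput={goodput_tps:.0f} TPS")
```

With 8 of 10 requests meeting the SLA, the advertised 1000 TPS collapses to a goodput of 800 TPS, matching the example in the post.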
LLM applications are frustratingly difficult to test due to their probabilistic nature. But testing is crucial for customer-facing applications to ensure the reliability of generated answers. So, how does one effectively test an LLM app? Enter Confident AI’s DeepEval: a comprehensive open-source LLM evaluation framework with an excellent developer experience.

Key features of DeepEval:
- Ease of use: very similar to writing unit tests with pytest.
- Comprehensive suite of metrics: 14+ research-backed metrics for relevancy, hallucination, and more, including label-less standard metrics that can quantify your bot’s performance even without labeled ground truth. All you need is the input and output from the bot.
- Custom metrics: tailor your evaluation process by defining the custom metrics your business requires.
- Synthetic data generator: create an evaluation dataset synthetically to bootstrap your tests.

My recommendations for LLM evaluation:
- Metric model: use OpenAI GPT-4 as the judge model as much as possible.
- Test dataset generation: use the DeepEval Synthesizer to generate a comprehensive set of realistic questions.
- Bulk evaluation: if you are running multiple metrics on multiple questions, generate the responses once, store them in a pandas DataFrame, and calculate all the metrics in bulk with parallelization.
- Quantify hallucination: the faithfulness metric indicates how much of the generated output is factually consistent with the context provided by the retriever in RAG.
- CI/CD: run these tests automatically in your CI/CD pipeline to ensure every code change and prompt change doesn’t break anything.
- Guardrails: some high-speed tests can run on every API call in a post-processor before responding to the user; leave the slower tests for CI/CD.
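The unit-test-like flow can be sketched offline with a toy stand-in metric. Note this is not DeepEval’s actual API: the dataclass and word-overlap “faithfulness” below only mimic the framework’s pattern (DeepEval’s real metrics call an LLM judge such as GPT-4), so the whole block runs with no API key:

```python
from dataclasses import dataclass, field

@dataclass
class LLMTestCase:
    # Mirrors the idea of an LLM test case: bot input/output plus the
    # retrieval context (for RAG faithfulness-style checks). Hypothetical.
    input: str
    actual_output: str
    retrieval_context: list = field(default_factory=list)

def toy_faithfulness(case: LLMTestCase) -> float:
    """Toy label-less metric: fraction of output words found in the
    retrieval context. Real frameworks use an LLM judge instead."""
    context_words = set(" ".join(case.retrieval_context).lower().split())
    output_words = case.actual_output.lower().split()
    if not output_words:
        return 0.0
    return sum(w in context_words for w in output_words) / len(output_words)

def assert_test(case: LLMTestCase, threshold: float = 0.5) -> float:
    score = toy_faithfulness(case)
    assert score >= threshold, f"faithfulness {score:.2f} below {threshold}"
    return score

# Runs like a unit test: only the bot's input/output needed, no labels.
case = LLMTestCase(
    input="What is the refund window?",
    actual_output="refunds are available within 30 days",
    retrieval_context=["Refunds are available within 30 days of purchase."],
)
score = assert_test(case, threshold=0.8)
print(f"faithfulness score: {score:.2f}")
```

The same shape drops straight into a pytest file, which is what makes the pattern CI/CD-friendly: a failing metric fails the build.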
🌟 DeepEval GitHub: https://lnkd.in/g9VzqPqZ 🔗 DeepEval bulk evaluation: https://lnkd.in/g8DQ9JAh
I've spent countless hours building and evaluating AI systems. This is the 3-part evaluation roadmap I wish I had on day one. Evaluating an LLM system isn't one task; it's about measuring the performance of each component in the pipeline. You don't just test "the AI": you test the retrieval, the generation, and the overall agentic workflow.

Part 1: Evaluating Retrieval (The RAG Pipeline)
Your system is only as good as the context it retrieves.
Key metrics:
↳ Context Precision: how much of the retrieved context is actually relevant vs. noise?
↳ Context Recall: did you retrieve all the information necessary to answer the query?
↳ NDCG: how high up in the retrieved list are the most relevant documents?
Key resources:
↳ Framework: RAGAs repo https://lnkd.in/gAPdCRzh
↳ Paper: RAGAs paper https://lnkd.in/gUKVe4ac

Part 2: Evaluating Generation (The LLM's Response)
Once you have the context, how good is the model's actual output?
Key metrics:
↳ Faithfulness: does the answer stay grounded in the provided context, or does it start to hallucinate?
↳ Relevance: does the answer directly address the user's original prompt?
↳ Instruction Following: did the model adhere to the output format you requested?
Key resources:
↳ Technique: LLM-as-Judge paper https://lnkd.in/gyhaU5CC
↳ Frameworks: OpenAI Evals https://lnkd.in/g9rjmfGS and LangChain Evals https://lnkd.in/gmJt7ZBa

Part 3: Evaluating the Agent (The End-to-End System)
Does the system actually accomplish the task from start to finish?
Key metrics:
↳ Task Completion Rate: did the agent successfully achieve its final goal? This is your north star.
↳ Tool Usage Accuracy: did it call the correct tools with the correct arguments?
↳ Cost/Latency per Task: how many tokens and how much time did it take to complete the task?
Key resources:
↳ Google's ADK docs: https://lnkd.in/g2TpCWsq
↳ DeepLearning.AI Agents Eval course: https://lnkd.in/gcY8WyjV

Stop testing your AI like a monolith.
Start evaluating the components like a systems engineer. That's how you build systems that you can actually trust.
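The Part 1 retrieval metrics are plain ranking math and worth computing yourself before reaching for a framework. A minimal sketch with binary relevance (the doc ids and relevance set are hypothetical illustration data):

```python
import math

def precision_at_k(retrieved, relevant, k):
    """Fraction of the top-k retrieved documents that are relevant."""
    return sum(doc in relevant for doc in retrieved[:k]) / k

def recall_at_k(retrieved, relevant, k):
    """Fraction of all relevant documents that appear in the top-k."""
    return sum(doc in relevant for doc in retrieved[:k]) / len(relevant)

def ndcg_at_k(retrieved, relevant, k):
    """Binary-relevance NDCG: rewards placing relevant docs near the top."""
    dcg = sum(1 / math.log2(i + 2)
              for i, doc in enumerate(retrieved[:k]) if doc in relevant)
    ideal = sum(1 / math.log2(i + 2) for i in range(min(len(relevant), k)))
    return dcg / ideal if ideal else 0.0

# Hypothetical example: ids returned by the retriever vs. ground truth.
retrieved = ["d3", "d1", "d7", "d2", "d9"]
relevant = {"d1", "d2", "d5"}

prec = precision_at_k(retrieved, relevant, 5)  # relevant share of top 5
rec = recall_at_k(retrieved, relevant, 5)      # relevant docs recovered
ndcg = ndcg_at_k(retrieved, relevant, 5)       # position-aware quality
print(f"P@5={prec:.2f}  R@5={rec:.2f}  NDCG@5={ndcg:.3f}")
```

Here precision and recall agree that 2 relevant docs were found, while NDCG additionally penalizes them for sitting at ranks 2 and 4 rather than at the top.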