Testing Approaches for High-Risk and Low-Risk Code

Summary

Testing approaches for high-risk and low-risk code help teams decide how much scrutiny and caution to use when releasing new software, based on how much damage a bug could cause. High-risk code—like parts that impact security or money—needs more careful testing, while low-risk code can be checked more quickly and with simpler methods.

  • Assess risk level: Always start by identifying which parts of your code could lead to serious problems and which are less critical, so you can focus your testing efforts appropriately.
  • Use staged rollouts: Try launching high-risk features to only a small group of users first and monitor closely to spot issues before a wider release.
  • Prioritize test types: Invest more time in in-depth, automated, and scripted tests for high-risk code, while using faster, manual or exploratory checks for code that poses minimal risk (see the sketch after this list).
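
To make the triage above concrete, here is a minimal Python sketch of risk-based test planning. The area names, risk labels, and playbook entries are illustrative assumptions, not a standard; the point is simply that "focus scrutiny where bugs hurt most" can be written down and reviewed.

```python
# Minimal sketch: classify code areas by blast radius, then pick test depth.
# Area names, risk labels, and the playbook are illustrative assumptions.

AREAS = {
    "payments":      "high",    # money: a bug here is expensive
    "auth":          "high",    # security: a bug here is dangerous
    "search":        "medium",
    "profile_theme": "low",     # cosmetic: quick checks suffice
}

PLAYBOOK = {
    "high":   "scripted + automated regression suite, staged rollout",
    "medium": "automated smoke tests, normal release",
    "low":    "quick exploratory/manual checks, normal release",
}

for area, risk in AREAS.items():
    print(f"{area:14s} [{risk:6s}] -> {PLAYBOOK[risk]}")
```
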
  • Jyotirmay Samanta

    ex Google, ex Amazon, CEO at BinaryFolks | Applied AI | Custom Software | Product Development

    Circa 2012-14, at a FAANG company (can’t pinpoint which one, for obvious reasons 😉), we once faced a choice that could have cost MILLIONS in downtime… 𝐇𝐞𝐫𝐞’𝐬 𝐰𝐡𝐚𝐭 𝐰𝐞 𝐝𝐢𝐝.

    A critical system update was set to go live. Everything was tested, reviewed, and ready. Until a last-minute test showed an unusual error. 𝐍𝐨𝐰 𝐰𝐞 𝐡𝐚𝐝 𝐭𝐰𝐨 𝐨𝐩𝐭𝐢𝐨𝐧𝐬:

    ↳ Push ahead and risk an outage that could cost millions per minute.
    ↳ Roll back and delay a major feature for weeks.

    𝐍𝐞𝐢𝐭𝐡𝐞𝐫 𝐟𝐞𝐥𝐭 𝐫𝐢𝐠𝐡𝐭. So we took a smarter approach. 𝐇𝐞𝐫𝐞’𝐬 𝐰𝐡𝐚𝐭 𝐰𝐞 𝐝𝐢𝐝:

    ➡️ 1. Instead of an all-or-nothing launch, we released to 0.1% of our traffic first. If things went sideways, we could shut it down in real time.
    ➡️ 2. Pre-prod tests only catch what they’re designed to catch, but production is unpredictable. We used synthetic traffic to simulate real-user behavior in a controlled environment.
    ➡️ 3. We didn’t just have one rollback plan; 𝐰𝐞 𝐡𝐚𝐝 𝐭𝐡𝐫𝐞𝐞:
    • App-layer toggle – immediate rollback for end-user impact.
    • Traffic rerouting – redirecting requests to stable older versions if needed.
    • DB versioning – avoiding schema lock-in with backwards-compatible updates.
    ➡️ 4. We set up live telemetry dashboards tracking error rates, latencies, and key business metrics, so we weren’t reacting blindly.
    ➡️ 5. Before the rollout, we ran a “what-if” drill: if this update fails, how will it fail? This helped us build mitigation paths before they were needed.

    𝐖𝐡𝐚𝐭 𝐇𝐚𝐩𝐩𝐞𝐧𝐞𝐝? The anomaly we caught in testing never materialized in production. If we had rolled back, we’d have wasted weeks fixing a non-issue.

    Most teams still launch software with an “all or nothing” mindset. But controlled rollouts, kill switches, and real-time observability can let you ship fast and safe, without breaking everything.

    How does your team handle high-risk deployments? Would love to hear it 🙂
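
A minimal Python sketch of the pattern in points 1 and 3 above: a percentage-based rollout gate combined with an app-layer kill switch. The names (ROLLOUT_PERCENT, KILL_SWITCH, is_enabled) are hypothetical stand-ins, not the team's actual tooling.

```python
import hashlib

# Minimal sketch of a percentage-based rollout gate with a kill switch.
# ROLLOUT_PERCENT, KILL_SWITCH, and is_enabled are hypothetical names,
# not the FAANG team's actual tooling.

ROLLOUT_PERCENT = 0.1  # start by exposing 0.1% of traffic, as in the post
KILL_SWITCH = False    # app-layer toggle: flip to True to disable instantly

def is_enabled(user_id: str) -> bool:
    """Deterministically bucket a user into (or out of) the rollout cohort."""
    if KILL_SWITCH:
        return False
    # Hash the user id so the same user always gets the same decision.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100_000
    return bucket < ROLLOUT_PERCENT * 1000  # 0.1% of 100,000 buckets = 100

if __name__ == "__main__":
    on = sum(is_enabled(f"user-{i}") for i in range(100_000))
    print(f"{100 * on / 100_000:.2f}% of users see the new code path")
```

Hashing the user ID keeps cohort membership stable across requests, so the same users stay on the new path while the rollout percentage is dialed up, and flipping the kill switch turns the feature off for everyone instantly.
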

  • Sreejith Kanhirangadan

    I Help Pharma & Biotech Teams Automate CSV & Cut Validation Bottlenecks | Founder @ EVOLV | 10k+ Trained | 11x Course Creator

    𝘏𝘰𝘸 𝘥𝘰 𝘳𝘦𝘨𝘶𝘭𝘢𝘵𝘦𝘥 𝘤𝘰𝘮𝘱𝘢𝘯𝘪𝘦𝘴 𝘦𝘯𝘴𝘶𝘳𝘦 𝘤𝘰𝘮𝘱𝘭𝘪𝘢𝘯𝘵 𝘷𝘢𝘭𝘪𝘥𝘢𝘵𝘪𝘰𝘯? 𝘒𝘦𝘺 𝘪𝘯𝘪𝘵𝘪𝘢𝘭 𝘴𝘵𝘦𝘱𝘴.

    Amid debates over CSV, CSA, and their relevance, embracing new technologies is crucial for young CSV practitioners and startups in med device, pharma, and biotech. Just as the iPhone continually upgrades while maintaining its core use, these industries must innovate to achieve greater outcomes and efficiencies.

    So today, we will discuss the initial thoughts a regulated company and its key stakeholders (BO, SO, CSV, QA, Sponsor, etc.) should have when planning to buy a vendor product that may need to go through GxP Computer System Validation (CSV).

    Here's an initial checklist with sample yes/no statuses (add yours in the comments):
    ✔️ CSV Procedure: Yes
    ✔️ Testing or Lifecycle Management Software Tool: No
    ✔️ Vendor Assessment: Completed and Accepted
    ✔️ High-Level Requirements: Available
    ✔️ Intended Use Defined: Yes
    ✔️ Key Stakeholders Identified: BO, SO, CSV, QA, PM
    ✔️ Vendor Solution Meeting Requirements: 75% (25% configuration needed)
    ✔️ Vendor Trustworthiness: High
    ✔️ Vendor Sharing Testing and Verification Documents: Yes

    𝗦𝘁𝗲𝗽𝘀 𝘁𝗼 𝗩𝗮𝗹𝗶𝗱𝗮𝘁𝗲:

    1. Risk Assessment:
    • Classify requirements into High, Medium, and Low risk.
    • Evaluate how much of each risk category is met out of the box (OOB) by the vendor software.

    2. Focus on High-Risk Requirements:
    • In this example, 50% of high-risk requirements are met OOB.
    • Develop and execute scripted test cases for the remaining 50%.

    3. Testing Strategy:
    • Use scripted or unscripted verification for medium- and low-risk requirements.
    • Prioritize testing frequently updated OOB features to prevent bugs from reaching production.

    4. Implementation Experience:
    • Prioritize implementing mature modules (e.g., Veeva QMS) first for reliability.

    5. Configuration Testing:
    • Focus on testing configurations made to meet specific requirements, including functional, regulatory, security, and data integrity aspects.

    6. Empowering Testers:
    • Encourage testers to test thoroughly and identify bugs early in the development cycle.

    By following these initial steps, you can set up a strong start to a smooth and compliant validation process for your vendor product.

    𝘛𝘰𝘮𝘰𝘳𝘳𝘰𝘸: 𝘞𝘩𝘢𝘵 𝘢𝘳𝘦 𝘵𝘩𝘦 𝘷𝘢𝘭𝘪𝘥𝘢𝘵𝘪𝘰𝘯 𝘴𝘵𝘦𝘱𝘴 𝘢𝘯𝘥 𝘥𝘰𝘤𝘶𝘮𝘦𝘯𝘵𝘢𝘵𝘪𝘰𝘯 𝘳𝘦𝘲𝘶𝘪𝘳𝘦𝘮𝘦𝘯𝘵𝘴? #CSV #CSA #biotech #pharma #meddevice #AI
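
A minimal Python sketch of the step 1 risk triage above, assuming a simple requirement inventory. The requirement IDs, risk labels, and decision rules are illustrative assumptions, not a compliance-grade GxP procedure.

```python
# Minimal sketch of risk-based triage: map each requirement's risk class and
# out-of-the-box (OOB) coverage to a verification approach. All IDs, labels,
# and rules are illustrative, not a compliance-grade GxP procedure.

requirements = [
    {"id": "REQ-001", "risk": "high",   "met_oob": True},
    {"id": "REQ-002", "risk": "high",   "met_oob": False},  # needs configuration
    {"id": "REQ-003", "risk": "medium", "met_oob": True},
    {"id": "REQ-004", "risk": "low",    "met_oob": False},
]

def verification_approach(req: dict) -> str:
    """Map a requirement to a verification approach per the post's strategy."""
    if req["risk"] == "high" and not req["met_oob"]:
        return "develop and execute a scripted test case"
    if req["risk"] == "high":
        return "scripted verification, leaning on vendor test documents"
    # Medium- and low-risk requirements can use lighter-weight checks.
    return "scripted or unscripted verification"

for req in requirements:
    print(f'{req["id"]} ({req["risk"]}): {verification_approach(req)}')
```
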

  • Sagar Navroop

    Multi-Cloud Data Architect | AI | SIEM | Observability

    Can AI Deployments Achieve 98.9% Uptime?

    In machine learning (ML) operations, deploying updates safely without disrupting user experience is key. Two popular approaches, Shadow Testing and Blue-Green Deployment, help ensure smooth transitions while keeping uptime high.

    𝐒𝐡𝐚𝐝𝐨𝐰 𝐓𝐞𝐬𝐭𝐢𝐧𝐠 lets a new AI model run invisibly alongside the current one, processing real data but without user impact. This approach allows teams to compare predictions and fine-tune performance without interruptions. Imagine a new chef testing recipes in the background: feedback is gathered, but customers aren’t affected.

    𝐁𝐥𝐮𝐞-𝐆𝐫𝐞𝐞𝐧 𝐃𝐞𝐩𝐥𝐨𝐲𝐦𝐞𝐧𝐭 takes a gradual traffic approach, starting with a small slice (often 10%) directed to the new model (the green environment). This controlled rollout, similar to 𝐜𝐚𝐧𝐚𝐫𝐲 testing, allows teams to monitor results and catch issues early. An Application Load Balancer (𝐀𝐋𝐁) with weighted traffic routing keeps both environments (blue and green) live and actively handling traffic, but at different volumes. As testing completes and confidence builds, more traffic shifts to the green environment until it reaches 100%. Picture it as a new restaurant's soft launch: more guests are welcomed as operations are perfected.

    Both methods are powerful for AI. Shadow Testing provides silent, risk-free feedback, while Blue-Green offers a safe, monitored rollout, ensuring reliability and up to 98.9% uptime.

    𝐖𝐡𝐞𝐧 𝐭𝐨 𝐮𝐬𝐞?
    Use Shadow Testing when you need to compare a new model's predictions directly with the current one in real time, without affecting users. It's ideal for testing high-risk models requiring small incremental changes.
    Go with Blue-Green Deployment when you are confident in the new model and want a phased, monitored rollout. It works best for high-volume updates.

    If on AWS, use the following tools:
    𝐒𝐚𝐠𝐞𝐌𝐚𝐤𝐞𝐫: ideal for shadow testing, enabling you to test new models alongside current ones without impacting users.
    𝐂𝐨𝐝𝐞𝐃𝐞𝐩𝐥𝐨𝐲: supports blue-green deployments by gradually shifting traffic between old and new versions, for both applications and ML models.
    𝐀𝐩𝐩 𝐌𝐞𝐬𝐡: manages traffic routing for shadow or canary testing, allowing fine-grained control over service interactions in microservices.

    #mlupdatestrategies #twominutedigest
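
A minimal Python sketch of both patterns, with stand-in model functions and an in-process traffic split. It illustrates the ideas only; it does not use the SageMaker, CodeDeploy, or App Mesh APIs named in the post, and the models, weights, and divergence threshold are invented.

```python
import random

# Minimal sketches of the two strategies above. Model functions, weights, and
# the divergence threshold are illustrative stand-ins, written in plain Python.

def current_model(x: float) -> float:
    """Stand-in for the live ("blue") model."""
    return 2.0 * x

def candidate_model(x: float) -> float:
    """Stand-in for the new ("green" / shadow) model."""
    return 2.2 * x

def handle_request_shadow(x: float) -> float:
    """Shadow testing: the user only ever sees the current model's output."""
    live = current_model(x)
    shadow = candidate_model(x)  # runs silently on the same real input
    if abs(live - shadow) > 0.05 * abs(live):  # flag >5% disagreement
        print(f"divergence on {x:.3f}: live={live:.3f} shadow={shadow:.3f}")
    return live

def handle_request_weighted(x: float, green_weight: float = 0.10) -> float:
    """Blue-green with weighted routing: a slice of traffic hits green.

    An ALB with weighted target groups does this server-side; here we
    simulate the post's initial 10% split in-process.
    """
    if random.random() < green_weight:
        return candidate_model(x)  # green environment
    return current_model(x)        # blue environment

if __name__ == "__main__":
    for _ in range(5):
        handle_request_shadow(random.random())
        handle_request_weighted(random.random())
```
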

  • Ivan Barajas Vargas

    Forward-Deployed CEO | Building Thoughtful Testing Systems for Companies and Testers | Co-Founder @ MuukTest (Techstars ’20)

    In an ideal world, we’d get instant feedback on software quality the moment a line of code is written, whether by AI or humans. We’re working hard to build that world, but in the meantime: how do we BALANCE speed to market with the right level of testing?

    Here are 6 tips to approach it:

    1 - Assess your risk tolerance: Risk and user patience are variable. A fintech app handling transactions can’t afford the same level of defects as a social app with high engagement and few alternatives. Align your testing strategy with the actual cost of failure.

    2 - Define your “critical path”: Not all features are created equal. Identify the workflows that most impact revenue, security, or retention; these deserve the highest testing rigor.

    3 - Automate what matters: Automated tests provide confidence without slowing you down. Prioritize unit and integration tests for core functionality and use end-to-end tests strategically.

    4 - Leverage environment tiers: Move fast in lower environments but enforce stability in staging and production.

    5 - Shift left: Catching defects earlier saves time and cost. Embed testing at the commit, pull-request, and review stages to reduce late-stage surprises.

    6 - Timebox your testing: Not every feature needs exhaustive QA. Set clear limits based on risk, business impact, and development speed to avoid getting stuck in endless validation cycles.

    The goal is to move FAST WITHOUT shipping avoidable FIRES. Prioritization, intelligent automation, and risk-based decision-making will help you release with confidence (until we reach a future where testing is instant and invisible).

    Any other tips?
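
One way to encode tips 2 and 3 is risk-tiered test selection. The sketch below uses pytest's real marker mechanism, but the functions under test and the marker names (critical, low_risk) are invented for illustration.

```python
# Sketch of risk-tiered test selection using pytest markers: tag tests by
# risk tier so CI can run the critical-path suite on every change and the
# rest on a slower cadence. Functions under test are hypothetical stubs.

import pytest

def charge(order_id: str, cents: int) -> str:
    """Hypothetical payment call (stubbed so the example is self-contained)."""
    return "charged"

def render_banner(user: str) -> str:
    """Hypothetical cosmetic feature (stubbed)."""
    return f"<div>{user}</div>"

@pytest.mark.critical        # revenue path: run on every commit / PR
def test_payment_is_charged_once():
    assert charge(order_id="o-1", cents=500) == "charged"

@pytest.mark.low_risk        # cosmetic path: run nightly or timeboxed
def test_profile_banner_renders():
    assert render_banner(user="u-1").startswith("<div")
```

CI can then run `pytest -m critical` on every pull request and the full suite on a nightly schedule; registering the marker names under `markers` in pytest.ini silences unknown-marker warnings.
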
