Good engineering is wasted if you build the wrong product.

The other day, I meet a founder. He says, "Oh, you're a CTO?" He hands me his phone. "Can you look at my app? I'm not sure my engineering team did a good job." I say, "It's hard to be sure just by clicking around, but the layout seems fine and the performance is snappy. What's wrong with it?" "Well, people aren't using it enough."

Ah, the plot thickens. As it happens, the engineering team is doing fine. But they're contractors: they're given Figma mocks and deliver a pixel-perfect implementation. So how do those mocks get created? They're just following an arbitrary roadmap based on the founder's intuition.

Having strong intuition for what your users want is helpful, but it never happens in a vacuum. Your job as a founder is to talk to your users. A lot. When all you have is a wireframe, show your users and look for validation that it meets a real need they'd be willing to pay for. When you have a higher-fidelity prototype, do it again. Summarize, and share these summaries with your engineers. Everyone who touches execution should be reading them. Once you've launched, mine insights from your monitoring tools. Do new features improve these metrics? If early testers aren't engaging, ask why.

Always assume you're missing some key insight about user needs and be relentless in squeezing this insight from your users. Until you have product-market fit, the most valuable thing your users have for you isn't their money, it's their honest feedback. Getting this feedback isn't easy, but it's the shortest path to iterating on your product effectively. If you're not doing this, you're likely wasting precious time and engineering resources. 10 hours of talking to users saves you 100s (1000s?) of hours building the wrong thing.
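To make "mine insights from your monitoring tools" concrete, here is one minimal sketch: computing per-feature adoption from an event log. The log format, feature names, and numbers are all invented for illustration; a real product would pull this from its analytics pipeline.

```python
from collections import defaultdict
from datetime import date

# Hypothetical event log: (user_id, event_date, feature_name)
events = [
    ("u1", date(2024, 5, 1), "search"),
    ("u1", date(2024, 5, 2), "search"),
    ("u2", date(2024, 5, 1), "export"),
    ("u2", date(2024, 5, 3), "search"),
    ("u3", date(2024, 5, 3), "export"),
]

def feature_adoption(events):
    """Fraction of active users who touched each feature at least once."""
    all_users = {user for user, _, _ in events}
    users_by_feature = defaultdict(set)
    for user, _, feature in events:
        users_by_feature[feature].add(user)
    return {f: len(users) / len(all_users)
            for f, users in users_by_feature.items()}

print(feature_adoption(events))
```

Tracking a number like this before and after a release is a simple way to answer "did the new feature improve engagement?" with data rather than intuition.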
Value of Early Concept Testing in Engineering
Summary
Early concept testing in engineering means evaluating ideas and prototypes before investing significant time or resources in building a product. This process helps teams discover user needs, identify potential problems, and avoid costly mistakes by making changes when they’re still easy and inexpensive.
- Engage real users: Share rough sketches or early prototypes with potential users and gather honest feedback to ensure you’re solving the right problem.
- Observe actual behavior: Instead of asking if people like an idea, watch how they interact with it to see if they would use or pay for your solution in a real-world context.
- Test early, adjust quickly: Validate your riskiest assumptions at the start so you can make course corrections before committing large amounts of time or money to development.
One of the most common mistakes teams make when evaluating early product features is asking users whether they like an idea and treating the answer as evidence. Decades of behavioral research and very practical product research work show that this is a weak signal. People are generally bad at predicting what they will use, adopt, or pay for in the future, especially when there is no cost, effort, or tradeoff attached to their answer. That is why early feature evaluation should focus on behavior rather than belief.

When a feature is only a concept, a smoke test can already tell you a lot. Exposing users to the idea through a landing page, announcement, or waitlist and observing whether they click or sign up answers a very specific question: is this worth building at all, not whether it sounds good in theory.

When an idea becomes clickable, fake door tests bring the decision closer to real behavior. Placing a realistic entry point inside the product and observing who actually tries to use it shows intent in context. The power of this method comes from the fact that users believe the feature is real at the moment of interaction. Transparency afterward is essential, but the action itself is the signal.

For complex or technically risky features, especially AI, automation, or recommendation systems, Wizard of Oz prototyping allows teams to observe natural behavior before automation exists. Users interact with what looks like a fully functional system, while a human performs the work behind the scenes. This reveals expectations, decision making, and breakdowns that are invisible in abstract discussions.

Concierge MVPs go one step further by making the human involvement explicit. Here, the value is delivered manually, often in a high-touch way, to see whether users actually engage, return, and benefit. If people do not use or value the service when friction is low and quality is high, automation will not fix the underlying problem.
Across all of these approaches, the principle is the same. Early feature evaluation should not ask people what they like. It should watch what they do when a real opportunity to engage is placed in front of them.
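The fake-door pattern described above can be sketched in a few lines: the entry point looks real, clicking it logs the intent signal, and the user then sees a transparent "not built yet" message. This is a minimal, illustrative Python sketch, not a real analytics integration; `CLICK_LOG`, `fake_door_click`, and the message copy are all invented for the example.

```python
from datetime import datetime, timezone

CLICK_LOG = []  # stand-in for a real analytics sink

def fake_door_click(user_id: str, feature: str) -> str:
    """Record the intent signal, then be transparent with the user."""
    CLICK_LOG.append({
        "user": user_id,
        "feature": feature,
        "ts": datetime.now(timezone.utc).isoformat(),
    })
    return f"'{feature}' is coming soon. Thanks, your interest was noted!"

def interest_rate(clicks: int, exposures: int) -> float:
    """Share of users who saw the entry point and tried to use it."""
    return clicks / exposures if exposures else 0.0

print(fake_door_click("u42", "bulk export"))
print(interest_rate(len(CLICK_LOG), exposures=200))
```

The decision signal is the ratio of clicks to exposures: users who acted on a feature they believed was real, which is a much stronger input than "would you use this?" survey answers.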
-
Testing prototypes at 90% done is like checking the weather after you've left home. Pointless! Yet I see it constantly.

Teams build for months. Polish every pixel. Then, finally, ask users what they think. The feedback comes back devastating: "I don't understand what this does." "Why would I use this?" "This solves the wrong problem." But we're too invested to change course. Too much time. Too much money. Too much ego. So we ship it anyway. Add tooltips. Tweak copy. Pretend the core issue doesn't exist.

After 15 years in UX, here's how to test early:

1️⃣ Test paper sketches first. Show 5 users rough concepts. Ask: "What problem does this solve?" If they can't tell you, kill it now.

2️⃣ Test the job, not the solution. Don't ask "Would you use this?" Ask "Show me how you currently do this." Their struggle is your design brief.

3️⃣ Test your riskiest assumption first. What would kill your entire concept? Test that before anything else.

The best time to test was three months ago. The second best time is right now. Before you fall in love with your solution. Before you write a line of code. Before you even open Figma. Real research is uncomfortable. It kills ideas you love. But uncomfortable early beats catastrophic late.

As you know, I'm always on the lookout for new tools. Recently I've been using Lyssna for rapid concept testing. Not for validating finished designs (although you can do that with Lyssna, too), but for killing bad ideas before they waste everyone's time. What stood out:

→ Test sketches, concepts, and polished designs
→ Get answers in hours, not after the sprint ends
→ Clear data that's hard to ignore

Testing at 90% is theater. Testing at 10% is strategy. ✌️

P.S. What's the latest you've ever tested something that should have been tested way earlier? #LyssnaPartner
-
🚗 Imagine this: You launch a new car model after years of effort. Production is smooth, the assembly line is world-class… but six months later, the headlines scream "Massive Recall." Billions lost. Reputation damaged. All because of a design flaw that was locked in during the product development phase.

Takao Sakai once said: 👉 "95% of Toyota's profits are determined in the product development phase, not production."

And it's true across industries:
- In aerospace, material choices made at the design table decide 80% of lifecycle costs.
- In electronics, overengineering features adds cost but not value.
- In manufacturing, late design changes cause delays that no production efficiency can recover.

⚡ The real challenge? Most companies pour their energy into fixing problems on the shop floor instead of preventing them during development.

💡 The smarter way: Apply Design for Manufacturability (DFM) and Concurrent Engineering.
- Run early simulations and prototypes to detect risks.
- Involve quality, supply chain, and production teams at the concept stage.
- Use Voice of Customer (VOC) to cut out features no one wants but everyone pays for.

The truth is simple:
✅ Every mistake caught in design costs a fraction of fixing it in production.
✅ Every smart decision in development compounds into long-term profit.

🔑 What's one thing your team does during product development that safeguards future profitability? 👇 Share your experience, it might spark ideas for someone else!

#Lean #ProductDevelopment #DesignThinking #Innovation #BusinessExcellence #Quality #TQM
-
I had a fascinating conversation with Steve Quinlan of NatWest Group recently, and it really highlighted a fundamental issue in how many product teams approach experimentation.

Too often, "experimentation" is seen as something that happens after a feature is built. This is putting the cart before the horse: you've already invested significant time and resources, and now you're hoping to validate whether it was worth it. True experimentation should be about validating and developing ideas before they enter serious development and as they go through design.

Steve sits within a 'prototyping' function at NatWest created with this purpose in mind. They focus on de-risking development by rigorously testing and iterating on ideas early in the process. This approach not only saves valuable resources but also ensures that the final product truly meets customer needs. Moreover, Steve's team's work pushes back on the narrow view that experimentation is just about A/B testing. It's about a broader, more strategic approach to product research, discovery and validation.

It raises the question: how many product teams are missing out on this critical early-stage validation? How often are we building features based on assumptions rather than solid evidence, even if they are 'tested' before release? Shifting our mindset to prioritize prototyping and early-stage experimentation can revolutionize how we build products and drive innovation.

How does your team ensure that experimentation is integrated into the entire product development lifecycle, not just tacked on at the end?

#experimentation #cro #productmanagement #growth #digitalexperience #experimentationledgrowth #elg
-
Concept testing unlocks better design ideas.

When I work with teams, I often find they are eager to get feedback on a full user flow before they even know what parts of it people find interesting or easy to understand. In most cases, concept testing is more helpful at this stage. It forces teams to think more creatively by figuring out how different parts of a design work together. Once you've found what matters in the design, task analysis is great for fine-tuning the details.

Think of it this way: concept testing is intentionally messy. It's meant for exploring ideas. Task analysis is structured. It's used to test and validate a direction. They both matter but are useful at different points in the process.

We use concept testing to test many ideas or directions with an audience to see what works. Most of the time, with multivariate testing, there's no need to worry about perfect order or flow. Continuous testing reveals patterns.

→ start with a basic idea or concept
→ get broad feedback on different versions or directions
→ try out different combinations or steps
→ use it to open up possibilities
→ learn what works and shape your direction

We use task analysis to check whether a specific flow or task makes sense to people.

→ start with a clear task
→ have users go through it step by step
→ see if they can complete it easily
→ use it to confirm the design works as expected

Concept testing helps you explore what's possible. Task analysis helps you make sure it works. Use both, but know when to use which!

#productdesign #productdiscovery #userresearch #uxresearch
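One common way to read multivariate concept-test results is to compare engagement rates between variants and sanity-check the gap with a two-proportion z-test. The post doesn't prescribe any particular statistic, and the tallies below are invented; this is just one reasonable sketch of turning "which concept works?" into a number.

```python
import math

# Hypothetical concept-test tallies: (participants who engaged, participants shown)
results = {"concept_a": (34, 120), "concept_b": (18, 115)}

def engagement_rate(successes: int, shown: int) -> float:
    return successes / shown

def two_proportion_z(s1, n1, s2, n2):
    """Rough z-score for the difference between two engagement rates."""
    p1, p2 = s1 / n1, s2 / n2
    pooled = (s1 + s2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

for name, (s, n) in results.items():
    print(name, round(engagement_rate(s, n), 3))

# |z| above ~1.96 suggests the gap is unlikely to be noise (5% level)
print(round(two_proportion_z(34, 120, 18, 115), 2))
```

With small early-stage samples the point is not statistical rigor but triage: a large, stable gap between concepts is a direction worth pursuing, while a tiny one says "keep exploring."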
-
By the time a design reaches prototyping, most of the product cost structure is already set. Material choices, design complexity, and manufacturing methods defined in engineering determine the bulk of COGS. How those decisions are validated drives R&D cost (OPEX): every prototype, test loop, and delay adds expense.

Then there are the costs that rarely show up on a line item: concepts that never clear a gate, iterations that stall, projects quietly shelved. These hidden costs accumulate as wasted engineering hours, delayed launches, and margin erosion when compromises slip through.

What we're seeing in the market: companies serious about R&D cost reduction invest in decision quality. They enable engineers to explore and validate ideas earlier, so the wrong paths are closed quickly and cheaply. Simulation and AI can help by reducing the cost of being wrong. If a weak concept can be invalidated in minutes instead of months, the savings ripple through OPEX and COGS alike.

If this is on your agenda, let's swap notes. More info below.
-
We spent £60K to kill a £1M idea.

A few years back, one of the UK's biggest equipment hire companies brought us in. They'd just launched a shiny new loyalty programme: £1M invested in build and marketing. But there was a problem. It wasn't picking up.

When we asked what research had been done before launch, the answer was one word. None. So we ran £60K worth of focus groups to find out why. The feedback was brutal. Customers didn't want another points scheme. One even said, "I already get van insurance cheaper." That one line killed the idea on the spot, saving another million plus just in roll-out costs.

Sometimes the value of research isn't in what it tells you to do, it's in what it tells you not to. And that lesson has stuck with me ever since. In business, we're wired to launch, build, and scale, but the smartest moves often come from slowing down and asking first.

That experience taught me the value of testing before you scale. The earlier you stress-test an idea, the cheaper it is to fix. Treat research as insurance against waste. £60K of insight looks cheap next to £1M of regret.
-
One of the most common misconceptions in early-stage startups is that if you build something technically extraordinary with a talented team, success will naturally follow. The reality is far more nuanced.

Yes, building a complex product under tight resource constraints is challenging. The trade-offs alone can feel insurmountable. But the most critical, and often overlooked, challenge at this stage is constructing a feedback loop while the product is being developed. For engineers-turned-founders, this is especially dangerous. The instinct to focus solely on technical execution, what I call "engineering in the closet," can doom even the most innovative startups. Without input from potential users or customers, you risk building a product that solves a problem no one has, or in a way no one values.

The truth:
👉 Building doesn't truly begin until the feedback loop is in place.
👉 Early validation ensures you're creating the right solution, not just a technically impressive one.
👉 Regular feedback forces you to align your product with real-world needs, long before it's too late.

A practical approach: Create a simple demo to gather feedback early. This doesn't require a fully functioning product; mocked or simulated backends are perfectly fine. A demo not only highlights your value proposition and product experience but also compels you to practice articulating its benefits.

These early iterations are invaluable. They help you refine your direction, strengthen your messaging, and ensure that your efforts are aligned with real demand. Founder-led sales are critical through the seed stage, and this process builds the muscle of selling early and often. By the time the product is ready for market, founders will already have a head start, both in refining the pitch and in building relationships that can drive adoption.

#Startups #EngineeringLeadership #ProductDevelopment #FounderInsights
-
One of the most common questions I hear in early-stage development is: which scenario is the right one? How can I weight the constraints to obtain feasible scenarios beyond paper?

This is the beauty of navigating the uncertainty funnel: reaching its narrow end as fast as possible. This is the knowledge of what makes sense on paper versus in reality... from the early phases. In this picture from my early days at Siemens Energy, I learned fast that what the paper/computer indicates and reality don't always coincide if the modelling is not accurate enough or the data is not from the right source or method. Being on that turbine nacelle analyzing load sensors (strain gauges) close to the tower top, while the whole thing moved non-stop, made me see fatigue loads at the tower top completely differently back in the office :)

Early-stage work is full of open variables. Wind resource, constraints, layout strategies, grid access. None of them are fixed yet, and that's precisely where the opportunity lies. Comparing scenarios early allows teams to explore those variables instead of locking into assumptions too soon. When different options are placed side by side and sensitivity studies can be made seamlessly, the conversation changes. Trade-offs become visible. Risks are easier to explain. And decisions move away from gut feeling toward informed judgment.

This way of working is increasingly reflected in research and best practices across the sector. Institutions like Fraunhofer-Institut für Windenergiesysteme consistently highlight the value of scenario-based analysis to understand complexity and support robust design choices early on. What often goes unnoticed is how much this early work pays off later. Projects that explore alternatives from the start tend to move through advanced phases with greater confidence. Not because challenges disappear, but because expectations are already aligned.

Strong projects are not defined by how quickly a decision is made, but by how well the decision holds up over time, and by how well they can prove the move from 2D (paper) to 3D (reality). In a sector where conditions constantly evolve, the ability to compare, adapt and choose with clarity may be one of the most underrated skills we have. ⚡💨

#WindEnergy #ProjectDevelopment #EarlyStage #Engineering #RenewableEnergy #Leadership #Youwind #ClarityInMotion
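A minimal way to sketch the scenario-comparison idea above: score a couple of layout options across several wind-resource assumptions and look at the spread, not just the best case. All names and figures below are invented for illustration; real comparisons would use proper AEP and cost models.

```python
# Illustrative early-stage scenario comparison (all numbers invented):
# cost per unit of energy for each layout under low/base/high wind cases.

scenarios = {
    "dense_layout":  {"aep_gwh": 410, "capex_meur": 320},
    "sparse_layout": {"aep_gwh": 435, "capex_meur": 360},
}
wind_cases = {"low": 0.92, "base": 1.00, "high": 1.06}  # AEP multipliers

def cost_per_gwh(scenario: dict, multiplier: float) -> float:
    """Simple CAPEX-per-energy proxy under one wind-resource assumption."""
    return scenario["capex_meur"] / (scenario["aep_gwh"] * multiplier)

for name, s in scenarios.items():
    spread = [round(cost_per_gwh(s, m), 3) for m in wind_cases.values()]
    print(name, spread)  # a narrow spread signals a robust option
```

Even a toy sensitivity table like this makes trade-offs visible side by side, which is exactly the shift from gut feeling toward informed judgment the post describes.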