A thought-provoking conversation between Aravind Srinivas (Founder, Perplexity) and Ali Ghodsi (CEO, Databricks) at a recent Perplexity Business Fellowship session offered deep insights into the practical realities and challenges of AI adoption in enterprises.

TL;DR:

1. Reliability is crucial but challenging: Enterprises demand consistent, predictable results. Despite impressive model advancements, ensuring reliable outcomes at scale remains a significant hurdle.
2. Semantic ambiguity in enterprise data: Ali pointed out that understanding enterprise data, which is often riddled with ambiguous terms (does "C" mean Calcutta or California?), is a substantial ongoing challenge that requires extensive human oversight to resolve.
3. Synthetic data & customized benchmarks: With proprietary data scarce, synthetic data generation and custom benchmarks are key to improving AI reliability. Yet creating these benchmarks accurately remains complex and resource-intensive.
4. Strategic AI limitations: Ali expressed skepticism about AI's current ability to automate high-level strategic tasks like CEO decision-making, given their complexity and the nuanced human judgment they require.
5. Incremental productivity, not fundamental transformation: AI significantly enhances productivity in straightforward tasks (HR, sales, finance) but struggles to transform complex, collaborative activities such as aligning product strategies and managing roadmap priorities.
6. Model fatigue and inference-time compute: Despite rapid model improvements, Ali highlighted the phenomenon of "model fatigue," where incremental model updates feel less and less impactful even as real underlying progress continues.
7. Human-centric coordination still essential: Even at Databricks, AI hasn't yet addressed core challenges around human collaboration, politics, and organizational alignment. Human intuition, consensus-building, and negotiation remain central.

Overall, the key challenges for enterprises, as Ali highlighted, are:

- Quality and reliability of data.
- Evals: the yardsticks by which we determine whether the system is working well. We still need better evals (see the sketch below).
- Extremely high-quality data for a specific domain and use case is hard to obtain; synthetic data plus evals are key.

The path forward with AI is filled with potential, but it is clearly still a journey with many practical challenges to navigate.
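On the evals point, here is a minimal sketch of what a custom, domain-specific eval harness can look like: a set of synthetic test cases that encode the ambiguous enterprise semantics described above, scored against the model under test. Everything here is a hypothetical placeholder (the `ask_model` stub and the example cases), not anything from the conversation itself.

```python
# Minimal sketch of a custom eval harness. `ask_model` is a hypothetical
# stand-in for your real model call (API client, local model, etc.).

def ask_model(question: str) -> str:
    """Stand-in for a real model call; replace with your API client."""
    return "California"  # canned answer so the sketch runs end to end

# Synthetic cases encoding ambiguous enterprise semantics, e.g. what a
# cryptic column value actually means in this company's data.
EVAL_CASES = [
    {"question": "In the orders table, what does region code 'C' mean?",
     "expected": "California"},
    {"question": "Expand the status code 'PND' in the shipments table.",
     "expected": "Pending"},
]

def run_evals(cases) -> float:
    """Return the fraction of cases the model answers acceptably."""
    passed = 0
    for case in cases:
        answer = ask_model(case["question"])
        # Naive substring scoring; real evals need richer matching
        # (normalization, rubric scoring, LLM-as-judge, etc.).
        if case["expected"].lower() in answer.lower():
            passed += 1
    return passed / len(cases)

print(f"pass rate: {run_evals(EVAL_CASES):.0%}")
```

The value is less in the scoring code than in the cases: each one pins down a piece of domain meaning ("C" means California here) that a generic benchmark would never cover.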
Challenges When Adopting New AI Frameworks
Summary
Adopting new AI frameworks means navigating the complex tools and methods used to build artificial intelligence systems, which can be challenging for businesses due to issues like poor data quality, unpredictable costs, and evolving legal risks. These challenges can slow AI adoption and make it harder to achieve trustworthy results and efficient workflows.
- Focus on data quality: Ensure your data is clean, organized, and relevant, as poor data can cause unreliable AI outcomes and increase workload for everyone involved.
- Prepare for shifting costs: Plan for unexpected expenses in training, maintaining, and updating your AI systems, as these costs can fluctuate and surprise even the most careful teams.
- Clarify legal and compliance risks: Work closely with legal experts to understand and address privacy, bias, and regulatory questions before launching new AI solutions.
🚩 Up to 50% of #RPA projects fail (EY)
🚩 Generative AI suffers from pilotitis (endless AI experiments, zero implementation)

DITCH TECHNOLOGICAL NOSTALGIA: your RPA playbook is not enough for Generative AI.

In the race to adopt #GenerativeAI, too many enterprises are stumbling at the starting line, weighed down by the comfortable familiarity of their #RPA strategies. It's time to face an uncomfortable truth: your past automation successes might be your biggest obstacle to AI innovation. There is a difference:

1. ROI Focus Isn't Enough: AI's potential goes beyond traditional ROI metrics. How do you measure the value of a technology that can innovate, create, and yes, occasionally hallucinate?
2. Hidden Costs Will Blindside You: Forget predictable RPA costs. AI's hidden expenses in change management, data preparation, and ongoing training can surprise you and scale non-linearly.
3. Data Readiness Is Make-or-Break: Unlike RPA's structured data needs, AI thrives on diverse, high-quality data. Many companies need complete data overhauls. Is your data truly AI-ready, or are you feeding a sophisticated hallucination machine?
4. Operational Costs Are a Moving Target: AI's operational costs can fluctuate wildly. Can your budget handle this uncertainty, especially when you might be paying for both brilliant insights and complete fabrications?
5. Problem Complexity Is on Another Level: RPA handles structured, rule-based processes. AI tackles complex, unstructured problems requiring reasoning and creativity. Are your use cases truly leveraging AI's potential?
6. Outputs Can Be Unpredictable: RPA gives consistent outputs. AI can surprise you, sometimes brilliantly, sometimes disastrously. How will you manage this unpredictability in critical business processes?
7. An Ethical Minefield Lies Ahead: RPA had minimal ethical concerns. AI brings significant challenges in bias, privacy, and decision-making transparency. Is your ethical framework robust enough for AI?
8. The Skill Gap Is an Abyss: AI requires skills far beyond RPA expertise: data science, machine learning, domain knowledge, and the crucial ability to distinguish AI fact from fiction. Where will you find this talent?
9. The Regulatory Landscape Is Shifting: Unlike RPA, AI faces increasing regulatory scrutiny. Are you prepared for the evolving legal and compliance challenges of AI deployment?

Treating #AI like #intelligentautomation, both in how you learn about it and in how you implement it, is a path devoid of success. It's time to rewrite the playbook and move beyond the comfort of 'automation COE leadership'. #AIleadership
One challenge we're seeing more as enterprises adopt AI: navigating the legal diligence process. I'm curious if other enterprise AI vendors have encountered this.

With AI evolving so fast, many enterprise legal departments are still figuring out how to evaluate risks, especially when it comes to data usage, bias, and model behavior. It's not due to a lack of care. The reality is that the frameworks and language to assess these risks are still catching up.

Some examples we've seen:

- We often receive detailed diligence questionnaires from prospective customers asking how we "train our models," even though we don't build foundational models; only a handful of companies do. That misunderstanding alone can lead to weeks of clarification.
- We've also been asked to prove our AI doesn't introduce bias, even though our use cases involve software deployment, not decisions like lending or hiring. Legal teams don't always have the tools to differentiate those contexts, and understandably so; it's new territory for everyone.

The core issue isn't resistance; it's a knowledge gap. Without clarity on the actual risks, many teams default to asking what they can, even if it's not fully aligned with the use case.

Getting the tech right is only half the battle. Educating customers and ensuring everyone is up to speed on the legal, security, and compliance landscape is just as critical.
Holi, Colors & AI Conversations: The Real Challenges in Model Customization 🎨🤖

After an amazing Holi celebration in our society, filled with vibrant colors and laughter, I caught up with a few friends from the industry in the afternoon. What started as a casual discussion quickly turned into an insightful conversation about AI/ML model customization and its biggest challenge: data integrity.

One of the key questions that came up was: "Mihir, what do you think is the biggest roadblock in AI model customization?" As I unpacked the topic, we identified seven major challenges that organizations face when customizing AI models:

1️⃣ Data Privacy & Security 🔐
AI thrives on data, but how do we ensure privacy, security, and compliance with regulations (GDPR, CCPA) while still leveraging data effectively? Striking this balance remains a tough challenge.

2️⃣ Data Quality & Preparation 📊
AI models are only as good as the data they learn from. Inconsistent, biased, or poor-quality data can lead to unreliable results, making data cleansing and preprocessing non-negotiable.

3️⃣ Measuring Real Impact ("As-Is" vs. AI-Driven) 📈
How do we objectively measure AI's success? Comparing AI-powered decisions with existing processes helps assess whether the model is truly adding value or just making things more complex (a minimal measurement sketch follows this post).

4️⃣ Developer Talent & Skills in Generative AI 🧑‍💻
AI is evolving rapidly, but do we have enough skilled engineers who can bridge the gap between technical AI models and business impact? The talent shortage in this space is real.

5️⃣ Access to Real-Time Data ⏳
While historical data is important, real-time insights drive better decisions. The challenge is integrating and processing real-time data efficiently so that AI models generate accurate, dynamic outputs.

6️⃣ Handling Diverse Data Structures 🔄
AI models don't just work with clean, structured databases. They need to interpret text, images, videos, voice, sensor data, and more. Managing this complexity without losing context is a constant challenge.

7️⃣ Keeping Up with Rapid Model Changes ⚡
AI models are not static; they evolve. Continuous learning, retraining, and adapting to new data patterns require robust pipelines, automation, and governance, which many companies struggle to implement effectively.

By the end of the discussion, one thing was clear: AI/ML customization is not just about building models; it's about integrating them into a trusted, scalable, high-impact ecosystem.

Would love to hear from my network: which of these challenges resonates with you the most? How are you addressing them? Let's keep the conversation going! 🚀

#AI #MachineLearning #DataIntegrity #GenAI #ModelCustomization #HoliVibes #TechTalks #DataQuality #AIChallenges
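On point 3, here is a minimal sketch of what an "as-is" vs. AI-driven comparison can look like in practice: the same business metric collected under both processes, with a bootstrap confidence interval so the measured uplift isn't mistaken for noise. The numbers and metric are made-up placeholders, not real project data.

```python
# Minimal sketch: compare one business metric between the existing
# ("as-is") process and the AI-assisted one. Pure stdlib; the outcome
# lists below are made-up placeholders.
import random
import statistics

as_is = [4.2, 3.9, 4.5, 4.1, 3.8, 4.0, 4.3]   # e.g. handling time (min)
ai_led = [3.6, 3.4, 4.0, 3.2, 3.7, 3.5, 3.3]  # same metric, AI-assisted

def bootstrap_uplift_ci(a, b, n_boot=10_000, alpha=0.05):
    """Bootstrap CI for mean(b) - mean(a); negative means AI reduced it."""
    diffs = []
    for _ in range(n_boot):
        resampled_a = random.choices(a, k=len(a))
        resampled_b = random.choices(b, k=len(b))
        diffs.append(statistics.mean(resampled_b) - statistics.mean(resampled_a))
    diffs.sort()
    return (diffs[int(n_boot * alpha / 2)],
            diffs[int(n_boot * (1 - alpha / 2))])

uplift = statistics.mean(ai_led) - statistics.mean(as_is)
lo, hi = bootstrap_uplift_ci(as_is, ai_led)
print(f"uplift: {uplift:+.2f} (95% CI [{lo:+.2f}, {hi:+.2f}])")
# If the interval excludes zero, the AI-driven process measurably moved
# the metric; if it straddles zero, you have noise, not impact.
```

Seven data points per arm is of course far too few for a real decision; the point is only that "measuring real impact" reduces to an honest comparison on a metric both processes share.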
We might be slowly recreating the Java Enterprise Edition era… but this time for AI.

A lot of the tooling in the AI space right now feels like it's being built faster than it's being understood. Somewhere along the way:

- A simple API call became an MCP server.
- A structured function call became tool orchestration.
- A while loop became an agent runtime SDK.
- A prompt became memory with vector stores, retrievers, and planners.

Now we are stuffing 20K to 50K tokens of instructions, tool schemas, role definitions, chat history, scratchpads, planner outputs, and so on into the context window… just to fetch the weather or hit an internal API. This is context pollution. Every extra token you add competes for attention inside the model. More context does not just cost more; it often performs worse. Important instructions get diluted by framework-generated prompts, tool descriptions, and latent scaffolding that you did not explicitly design.

If your agent just needs to:

- call a DB,
- hit an API,
- retry until the output is valid,

that is literally:

```
while (not_done) {
    call_model()
    maybe_call_function()
}
```

Modern models already support structured function calling. You can deterministically bind tools without inflating prompts or wrapping everything inside an orchestration layer. But instead, we now have SDKs that abstract the loop, the memory, the planner, the executor, the router, the retriever, and the evaluator, often to implement something that could have been written in 40 to 60 lines of code (see the sketch below).

Not all complexity is bad. But accidental complexity in something as probabilistic as LLM inference becomes architectural debt very quickly:

- Higher latency
- Higher token costs
- Harder debugging
- More non-determinism
- Reduced controllability

Sometimes the most scalable agent framework is:

- a clean prompt,
- a strict tool schema,
- a small retry loop,
- and good observability.

As AI moves from demos to infra, the leadership challenge is not adopting the newest framework. It is recognizing when not to. Elegance here will increasingly look like less prompt stuffing, fewer layers, explicit control, smaller contexts, and boring, predictable loops.
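For illustration, here is roughly what that 40-to-60-line, framework-free version looks like: a bounded loop with structured tool calling and no orchestration layer. The `chat` function is a hypothetical stand-in for whatever model client you actually use (its canned replies just let the sketch run end to end); the tool name and schema are placeholders too.

```python
# Minimal sketch of a framework-free agent loop with structured tool
# calling. `chat` stands in for a real chat-completions client that
# supports tools; everything here is a hypothetical placeholder.
import json

TOOLS = {
    "get_weather": lambda city: json.dumps({"city": city, "temp_c": 21}),
}

TOOL_SCHEMA = [{
    "name": "get_weather",
    "description": "Look up the current weather for a city.",
    "parameters": {"type": "object",
                   "properties": {"city": {"type": "string"}},
                   "required": ["city"]},
}]

def chat(messages, tools):
    """Stand-in for a real model call. Returns either a final answer
    ({'content': ...}) or a tool-call request ({'tool_call': ...})."""
    if not any(m["role"] == "tool" for m in messages):
        return {"tool_call": {"name": "get_weather",
                              "arguments": {"city": "Berlin"}}}
    return {"content": "It is currently 21°C in Berlin."}

def run_agent(user_msg: str, max_steps: int = 5) -> str:
    messages = [{"role": "user", "content": user_msg}]
    for _ in range(max_steps):           # bounded: guards termination
        reply = chat(messages, TOOL_SCHEMA)
        if "tool_call" in reply:         # model asked for a tool
            call = reply["tool_call"]
            result = TOOLS[call["name"]](**call["arguments"])
            messages.append({"role": "tool", "name": call["name"],
                             "content": result})
            continue                     # feed the result back in
        return reply["content"]          # final answer
    raise RuntimeError("agent did not terminate within max_steps")

print(run_agent("What's the weather in Berlin?"))
```

The control flow the SDKs abstract away is, in many cases, just this: a bounded loop, a strict schema, and a dispatch table, with the context containing only what you put there.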
💡 Challenges in Designing AI Systems

With new AI tools launching almost daily, one thing is becoming clear: many of them are poorly designed. Despite impressive capabilities, they are often not aligned with real user problems and, as a result, lack clarity and trust. This makes them hard to adopt or scale.

Tia Clement created a nice diagram that maps the key challenges of designing AI systems across each stage of the Double Diamond process (https://lnkd.in/dizTq7Vy). The Double Diamond framework is known for its ability to help teams move from exploring the right problem to delivering the right solution. What makes this framework especially useful for AI products is that it doesn't just highlight the challenges; it aligns them with the actual phases of the design process. This makes it much easier to understand what issues to anticipate, what questions to ask, and what capabilities and constraints to plan for when building AI systems.

🔷 Discover (Explore the Problem)
Designers are trying to understand the context and user needs, but AI introduces unique challenges:
✔ Unclear boundaries of AI capabilities: It's hard to define what AI can and cannot do.
✔ Data dependency: Whether something is technically feasible depends heavily on data availability and quality.
✔ Lack of purposeful AI use: Teams often struggle to define why AI is needed in a product in the first place.
✔ Difficulty sketching divergent AI solutions: Traditional ideation tools don't translate well to speculative AI behaviors.

🔷 Define (Narrow Down the Problem)
This phase focuses on synthesizing findings into a clear problem statement or design brief:
✔ Fast prototyping is hard: It's difficult to simulate or quickly prototype AI behaviors, because doing so requires building a robust system.
✔ Unclear outcomes: Predicting the potential consequences of deploying AI systems is also challenging.

🔷 Develop (Explore Possible Solutions)
In this phase, ideas are generated and tested:
✔ Fuzzy, open-ended interaction design: AI doesn't always follow fixed rules, which complicates UX.
✔ Explainability: It's hard to communicate the outcome generated by AI (what the AI is doing and why).
✔ Communicating evolution: Users struggle to understand how AI systems change or improve over time.

🔷 Deliver (Narrow Down to the Final Solution)
The AI system is refined and launched, but with unique concerns:
✔ Unpredictability: AI behavior can be unexpected or inconsistent, making testing and release risky.
✔ Creepiness / uncanny valley: Users may feel discomfort when AI systems seem too unnatural.
✔ Accountability: It's unclear who is responsible when AI makes mistakes: the designers, the developers, or the AI system itself?

#AI #design #UX #uxdesign
Good paper on AI agent frameworks.

"In this paper, we conduct the first empirical study of LLM-based agent frameworks, exploring real-world experiences of developers in building AI agents. Specifically, we collect and analyze 1,575 LLM-based agent projects on GitHub along with 8,710 related developer discussions. Based on this, we identify ten representative frameworks and examine the functions they fulfill in the development process, how developers use them, and their popularity trends."

• Finding 1: Functional Roles. The ten LLM-based agent frameworks serve functional roles across four main categories: basic orchestration, multi-agent collaboration, data processing, and experimental exploration. They are applied across ten distinct domains, including software development.
• Finding 2: Multi-Framework Adoption. 96% of top-starred projects adopt multiple frameworks, indicating that a single-framework solution is typically insufficient to meet the complex demands of real-world agent applications.
• Finding 3: Popularity vs. Adoption. There is a significant gap between a framework's community popularity (GitHub stars, for example) and its actual real-world adoption. Developers should prioritize ecosystem maturity and maintenance activity over short-term popularity indicators when selecting a framework.
• Finding 4: Challenge Taxonomy. The challenges developers encounter are multifaceted and can be categorized into 4 domains and 9 distinct subcategories across the software development lifecycle.
• Finding 5: Internal Logic Failures. Over one-third of failures are caused by deficiencies in internal logic control. Specifically, task termination issues account for 21.63% of failures and message cooling issues for 9.86%.
• Finding 6: Tool Integration Challenges. API integration and connecting to third-party services represent major hurdles; 25.61% of reported issues involve API limitations, permission errors, and missing dynamic libraries.
Several CISOs have described to me the Catch-22 they're facing with AI and agents. The board is demanding an AI strategy, forcing a choice between two bad outcomes given the promise and risks of this new technology:

-> Delay adoption: fall behind competitors as your security and governance gaps widen.
-> Accelerate adoption: accept unknown risks from an entirely new attack surface.

Lampis Alevizos, Ph.D. describes this difficult position in his brilliant paper, "The AI Security Zugzwang." Zugzwang is a German chess term for a position in which every possible move worsens your situation. Alevizos's paper identifies three core forces driving this dilemma: a growing capability gap, the accelerating nature of risk, and a constantly shifting regulatory landscape.

Understanding which type of zugzwang you're in is the first step toward building a playbook to improve your position. Alevizos breaks them down into four distinct categories:

1. Adoption Zugzwang: The market is forcing your hand to adopt AI and agents to stay competitive.
2. Implementation Zugzwang: Security controls are limiting the usefulness of the AI's functionality.
3. Operational Zugzwang: The daily grind of managing a live, evolving system like AI, which requires rapid, untested patching and where fixing one security issue creates another.
4. Governance Zugzwang: You're forced to make rules and decisions in a compliance vacuum, without standardized risk frameworks.

In my latest post, I review Alevizos's framework to help CISOs diagnose their specific situation and shift the conversation from a risk matrix to a more nuanced, strategic discussion with the business. How do you think security teams can best adjust to the zugzwang they're encountering around AI? Link to post in comments!
I had an interesting discussion last night at a community meetup about Agentforce and AI. Somebody said, "Well, Gen Z will drive adoption; everyone's using AI anyway."

That's when I tried to challenge the room. First: not everyone uses AI. We live in a bubble. The Salesforce ecosystem is filled with people who are young, tech-comfortable, and surrounded by innovation. For us, AI is normal. We discuss prompts, copilots, and flows like it's second nature. But step outside the ecosystem bubble, and the reality is very different.

In one of my current projects, we're rolling out Salesforce globally for both Field Service and Customer Service. Field engineers are getting iPads for the first time. Before this, their daily tools were… Excel spreadsheets. When we launched, we discovered that many of them actually needed support just using the iPads: not Salesforce, not AI, but the devices themselves. That says a lot.

So, when I got home, I decided to do some quick research. In the UK, around one-third of the workforce is aged 50 or older, and this group is growing. The youngest demographic (16-17) is actually shrinking in participation. And that matters, because these experienced professionals, the ones who know the business best, are not typically the people using ChatGPT or exploring AI tools for fun. They don't live in the same digital comfort zone we do.

So while Salesforce and others are pushing Agentforce and the "AI-first" future, there's a huge portion of the workforce out there who won't naturally follow along. They'll need time, context, and training to understand what this means for them and how to use it effectively.

That's why I keep hammering on about fundamentals, user readiness, and proper onboarding. Because if we don't bridge that gap, we'll end up with "AI-powered" systems that users can't (or won't) use. We might see another wave of half-adopted rollouts, only this time with an AI badge on top.

So my two challenges are:

1. Knowing that next week at Dreamforce everything will be about AI again, how can we as an ecosystem safeguard the organisations and users that are simply not ready?
2. How can we as an ecosystem actually connect with our users and organisations, bring them the solutions they need, and explain everything in clear, plain language?

Would love to hear people's thoughts.
LLMs are ushering in a new era of AI, no doubt about it. And while the volume and velocity of innovation are astounding, I feel that we are forgetting the importance of the quality of the data that powers it all.

There is definitely a lot of talk about what data is used to train massive LLMs such as OpenAI's, and a lot of talk about leveraging your own data through fine-tuning and RAG. I also see increased attention on ops, whether LLMOps, MLOps, or DataOps, all of which is great for keeping your system and data running. What I see getting far less attention is managing your data: ensuring it is of high quality and available when and where you need it. We all know about garbage in, garbage out; if you do not give your system good data, you will not get good results. I believe that this new era of AI means that data engineering and data infrastructure will become key.

There are numerous challenges to getting your system into production from a data perspective. Here are some key areas where I have seen challenges arise:

1. Data: The data used in development is often not representative of what is seen in production, so data cleaning and transforms may miss important aspects of production data. This in turn degrades model performance, because the models were not trained and tested appropriately. New data sources are also often introduced in development that may not be available in production, and they need to be identified early (a simple validation gate like the sketch after this post can catch such gaps before deployment).

2. Pipelines: Moving data/ETL pipelines from development to staging to production environments. Either the environments (libraries, versions, tools) have incompatibilities, or the functions written in development were not tested in the other environments. The result is broken pipelines or functions that need rewriting.

3. Scaling: Although your pipelines and systems worked fine in development, even with some stress testing, once you reach the production environment and do integration testing, you realize the system is not scaling the way you expected and is not meeting its SLAs. This is true even for offline pipelines.

Having the right infrastructure, platforms, and teams in place to facilitate rapid innovation with seamless lifting to production is key to staying competitive. This is the one thing I see again and again being a large risk factor for many companies.

What do you all think? Are there other key areas you believe are crucial to pay attention to in order to get LLM and ML innovations into production efficiently?
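To make point 1 concrete, here is a minimal sketch of the kind of dev-versus-production data check that can catch unrepresentative data before it degrades a model. It assumes pandas and two hypothetical extracts (`dev.csv` and `prod_sample.csv`); the column names are placeholders, not from any real pipeline.

```python
# Minimal sketch: compare a production sample against the data a model
# was developed on, flagging schema gaps, null-rate shifts, and crude
# numeric drift. File and column names are hypothetical placeholders.
import pandas as pd

def check_drift(dev: pd.DataFrame, prod: pd.DataFrame, num_cols):
    """Return a list of human-readable dev-vs-prod data issues."""
    issues = []
    # Schema gaps: columns the dev-time transforms expect but prod lacks.
    missing = set(dev.columns) - set(prod.columns)
    if missing:
        issues.append(f"columns missing in prod: {sorted(missing)}")
    shared = set(dev.columns) & set(prod.columns)
    # Null-rate shifts: cleaning rules tuned on dev can silently fail.
    for col in shared:
        dev_null = dev[col].isna().mean()
        prod_null = prod[col].isna().mean()
        if abs(dev_null - prod_null) > 0.10:
            issues.append(f"{col}: null rate {dev_null:.0%} -> {prod_null:.0%}")
    # Crude drift check: mean shifted beyond 3 dev standard deviations.
    for col in num_cols:
        if col in shared and dev[col].std() > 0:
            shift = abs(prod[col].mean() - dev[col].mean()) / dev[col].std()
            if shift > 3:
                issues.append(f"{col}: mean shifted {shift:.1f} sigma")
    return issues

if __name__ == "__main__":
    # Hypothetical extracts; replace with your own dev data and a
    # freshly pulled production sample.
    dev = pd.read_csv("dev.csv")
    prod = pd.read_csv("prod_sample.csv")
    for issue in check_drift(dev, prod, num_cols=["amount", "latency_ms"]):
        print("DATA ISSUE:", issue)
```

Production systems would use a dedicated validation tool (Great Expectations, pandera, Evidently, and similar) plus proper statistical tests, but even a crude gate like this surfaces the dev/prod mismatches described above before they reach the model.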