Business Trends

Explore top LinkedIn content from expert professionals.

  • View profile for Tony Seale

    The Knowledge Graph Guy

    40,930 followers

    Over two years ago, I wrote about the emerging synergy between LLMs and ontologies - and how, together, they could create a self-reinforcing loop of continuous improvement. That post struck a chord. With GPT-5 now here, it’s the right moment to revisit the idea.

    Back then, GPT-3.5 and GPT-4 could draft ontology structures, but there were limits in context, reasoning, and abstraction. With GPT-5 (and other frontier models), that’s changing:

    🔹 Larger context windows let entire ontologies sit in working memory at once.
    🔹 Test-time compute enables better abstraction of concepts.
    🔹 Multimodal input can turn diagrams, tables, and videos into structured ontology scaffolds.
    🔹 Tool use allows ontologies to be validated, aligned, and extended in one flow.

    But some fundamentals remain. GPT-5 is still curve-fitting to a training set - and that brings limits:

    🔹 The flipside of flexibility is hallucination. OpenAI has reduced it, but GPT-5 still scores 0.55 on SimpleQA, with a 5% hallucination rate on its own public-question dataset.
    🔹 The model is bound by the landscape of its training data. That landscape is vast, but it excludes your private, proprietary data - and increasingly, an organisation’s edge will track directly to the data it owns outside that distribution.

    Fortunately, the benefits flow both ways. LLMs can help build ontologies, but ontologies and knowledge graphs can also help improve LLMs. The two systems can work in tandem. Ontologies bring structure, consistency, and domain-specific context. LLMs bring adaptability, speed, and pattern recognition that ontologies can’t achieve in isolation. Each offsets the other’s weaknesses - and together they make both stronger.

    The feedback loop is no longer theory - we’ve been proving it: Better LLM → Better Ontology → Better LLM - in your domain.

    There is a lot of hype around AI. GPT-5 is good, but not ground-breaking. Still, the progress over two years is remarkable.

    For the foreseeable future, we are living in a world where models keep improving - but where we must pair classic formal symbolic systems with these new probabilistic models. For organisations, the challenge is to match growing model power with equally strong growth in the power of their proprietary symbolic formalisation.

    Not all formalisations are equal. We want fewer brittle IF statements buried in application code, and more rich, flexible abstractions embedded in the data itself. That’s what ontologies and knowledge graphs promise to deliver.

    Two years ago, this was a hopeful idea. Today, it’s looking less like a nice-to-have… and more like the only sensible way forward for organisations.

    ⭕ Neural-Symbolic Loop: https://lnkd.in/eJ7S22hF
    🔗 Turn your data into a competitive edge: https://lnkd.in/eDd-5hpV
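The neural-symbolic loop the post describes can be sketched in a few lines. This is a minimal illustration under assumed names (nothing here comes from the post itself): LLM-drafted triples are filtered against a toy ontology's domain/range constraints, and only the accepted triples extend the graph that would ground the next prompt.

```python
# Toy neural-symbolic loop: symbolic constraints filter probabilistic output.
# The ontology is reduced to domain/range rules; the candidate triples stand
# in for LLM output. All names are illustrative assumptions.

ONTOLOGY = {
    # predicate: (required subject type, required object type)
    "supplies": ("Company", "Product"),
    "locatedIn": ("Company", "Country"),
}

TYPES = {"Acme": "Company", "Widget": "Product", "Ireland": "Country"}

def validate(triple):
    """Accept a (subject, predicate, object) triple only if it fits the ontology."""
    s, p, o = triple
    if p not in ONTOLOGY:
        return False
    dom, rng = ONTOLOGY[p]
    return TYPES.get(s) == dom and TYPES.get(o) == rng

def neural_symbolic_loop(candidate_triples, graph):
    """One pass of the loop: keep only ontology-consistent LLM output,
    then return the enriched graph that grounds the next LLM prompt."""
    accepted = [t for t in candidate_triples if validate(t)]
    graph.extend(accepted)
    return graph

graph = neural_symbolic_loop(
    [("Acme", "supplies", "Widget"),    # consistent: kept
     ("Acme", "locatedIn", "Widget")],  # range mismatch: rejected
    [],
)
```

The point of the sketch is the division of labour: the abstraction lives in the data (the `ONTOLOGY` dict) rather than in brittle IF statements scattered through application code.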

  • View profile for Laura Wyer

    Founder, CORACREI | FMCG Recruitment Partner | Building Sales, Marketing & Commercial Teams across Ireland & the UK

    19,528 followers

    As a recruiter, I’ve never seen a market like 2026. Plenty of roles exist but the rules of getting them have changed. Here’s my honest read, as a recruiter working across Ireland and the UK, on what’s really going on right now…

    🇮🇪 Ireland
    Grand news first! Ireland’s job market is holding up better than almost anywhere else in Europe, with hiring down roughly 7% year on year - one of the mildest declines in Europe, comparable to the Netherlands and much softer than the Eurozone average (over 10%). Unemployment sits at just under 5%. The catch? Roughly 66% of Irish employers say they’re struggling to fill roles due to skills shortages (according to 2026 surveys). There’s a real skills gap in Sales, Supply Chain, Digital, and Category. Which means if you’re the right person, there is still a seat at the table.

    🇬🇧 England / UK
    Tougher picture here. Unemployment is creeping over 5%, and permanent hires have been falling for over 3 years straight. The market’s starting to stabilise after a rough stretch. Vacancies are down, but not falling off a cliff. Candidate availability is way up, so every role feels like a squeeze - competition’s biting harder! Some sectors are crawling. Others are crying out for people who can actually deliver and see things through. It’s less about volume now, more about very specific profiles.

    🌍 Global
    No single stat sums it up, but the big picture shows a sluggish hiring environment. Skills shortages are everywhere, AI is on the rise, and bosses are picking skills over degrees. Hiring across EMEA is down by around 10-12% on average; France and Italy are both down by more than 14%. Same story in most places: fewer roles, more people, a lot of noise. And recruiter inboxes? Full of same-y applications. Which makes clear, evidence-based stories stand out even more.

    So what’s actually going on underneath all of this? The market isn’t overflowing. Experienced, visible talent is still getting snapped up. But the easy wins are gone, with employers being choosier now.

    The difference I’m seeing between people who are moving and people who are stuck:
    → They lead with outcomes, not job titles. Recruiters know you had responsibilities. Tell us the impact: clients won, revenue grown, growth delivered.
    → They network and get referrals. This beats blind applications 9 times out of 10. Especially in Ireland - this market still runs on relationships and a warm conversation.
    → They’re specific about what they want. You don’t need shiny to stand out, you need clear. Vague applications get vague results.
    → They show up as themselves. Genuinely. Share insights, not just your CV. Recruiters LOVE that.

    The people getting hired right now aren’t always the most experienced in the room. They’re the easiest to understand. And in this market, if it’s not obvious why you’re a yes… you’ll be a no.

  • View profile for Mark Hopkins

    21 Years Recruiting Engineers. SME Manufacturing Focus. Engineers, Senior Roles. Storage & Technical Sales. Ex-Aircraft Engineer. Podcaster & Ranter.

    14,559 followers

    A thought from listening to James Reed on LBC last night:

    He mentioned that the recent decline in the recruitment market might be linked to the National Insurance increase introduced earlier this year. It got me thinking. Because while NIC changes absolutely play a role, it’s worth zooming out and looking at the bigger picture.

    - We’ve had 37 consecutive months of job market decline (up to May)
    - There’s been a 17% year-on-year drop in job vacancies
    - GDP has been flat for several quarters

    And the NIC rise? That was implemented two months ago. So what explains the other 36 months?

    From my perspective working in recruitment every day, it’s clear the slowdown is being shaped by a combination of long-running, overlapping pressures:

    - Brexit – increased complexity in trade and talent movement
    - COVID – ongoing operational and financial impacts for many businesses
    - Energy costs – a major factor for manufacturers and SMEs
    - Cost of living – influencing both salary demands and consumer spending
    - General tax pressures – including Corporation Tax, thresholds, and payroll costs
    - Global market uncertainty – creating a cautious approach to planning
    - Technology shifts – rapid adoption of AI and automation altering hiring needs

    Together, these aren’t one-off shocks; they’re part of a longer trend. It’s not a single factor. It’s cumulative. A slow erosion of hiring confidence.

    And now we’re seeing something unusual: GDP is slightly up, but job vacancies are falling. That historically hasn’t been the case. Which suggests two things:

    1. Businesses are doing more with less – growth without hiring
    2. Many are in a holding pattern – delaying decisions, running lean, watching costs

    Cautious hiring. Measured investment. Play it safe.

    It’s easy to look at one policy or event and link it to market shifts. But when you work at the coalface, it’s clear this has been building for years, and we’re likely in the middle of a wider transition in how UK businesses grow, hire, and invest.

    If you want my final thoughts on this… This tailwind into the unknown is the new normal. We’re not just in a market dip, we’re in a transition. A new epoch for HMS Great Britain, and the waters ahead look choppy for some time to come. And not to get political, but changing the captain won’t change the weather. What we need now isn’t just leadership. We need a new course, a clear direction, and above all else, we need to rebuild hope.

  • View profile for Rémi Guyot

    Fondateur AI Discipline | Former les équipes produit à l’IA

    23,241 followers

    Clayton Christensen predicted it - product managers are underestimating the disruption caused by Large Language Models (LLMs), for the reasons described in The Innovator’s Dilemma. Incumbent organizations often focus on what new technologies CANNOT do, highlighting their limitations and risks instead of embracing the low-cost and scalability benefits that are emerging.

    Every profession has an implicit Return On Investment (ROI). If you’re rejecting LLMs because they can only accomplish tasks with 80% quality, you’re missing the point. A machine that can accomplish 80% of a task (= return) with merely 1% of the effort (= investment) offers a much, much better ROI than a human doing everything manually.

    Adding to this, there exists an absurd subconscious belief among some product managers that their lack of adoption will somehow slow down the inevitable tsunami of disruption. Combined with natural organizational inertia, this mindset results in a profession that clings to internal debates - such as the distinction between a product manager and a product owner - when it should be focusing on learning how to surf this lava-wave.

    Product managers should be obsessed with:
    1. Breaking down their jobs into huge lists of tiny tasks;
    2. Exploring how each task could be done slightly more rapidly thanks to LLMs;
    3. Figuring out what new investments or habits need to happen to accelerate the tango - starting by abandoning ChatGPT and hopping onto LLMs that tap into private databases, your most important asset moving forward.

    Here’s the beautiful part: LLMs are an amazing piece of technology, but the actual products remain to be invented on top of them. What’s holding you back?

  • View profile for Ross Dawson

    Futurist | Board advisor | Global keynote speaker | Founder: AHT Group - Informivity - Bondi Innovation | Humans + AI Leader | Bestselling author | Podcaster | LinkedIn Top Voice

    35,593 followers

    LLMs massively empower individuals. Used well, they augment thinking and intentions to an extraordinary degree. The impact is far more muted and delayed for large organizations, which have entrenched ways of working that will take years to shift through careful negotiation of culture and governance.

    AI doyen Andrej Karpathy has neatly laid out how genAI results, quite simply, in: power to the people. Transformative technologies have usually been developed and used by governments and the military, and then diffused to companies and individuals. For LLMs, everyone has access to the same quality AI, largely free, in every language, to be applied immediately to whatever users want to do.

    In contrast, there are many reasons why it will be far slower for organizations to get value:

    ➡️ LLMs offer broad but shallow capabilities, which are less valuable to organizations already equipped with deep domain experts.
    ➡️ Organizations already consolidate specialized expertise, so LLMs typically enhance existing workflows rather than enabling entirely new capabilities.
    ➡️ The improvements LLMs provide are incremental, making organizations slightly more efficient at tasks they already perform well.
    ➡️ Integrating LLMs into complex legacy systems and existing processes is technically challenging and resource-intensive.
    ➡️ Strict security, privacy, and regulatory requirements limit how freely LLMs can be used in corporate and government environments.
    ➡️ The risk of errors or hallucinations from LLMs is unacceptable in high-stakes or legally sensitive organizational contexts.
    ➡️ Organizational culture can resist the adoption of new tools, especially when they disrupt established roles or processes.
    ➡️ Decision-making in large organizations is often slow, with multiple layers of approval and governance slowing experimentation.
    ➡️ Retraining employees to use LLMs effectively at scale is a significant undertaking with cost and coordination challenges.
    ➡️ Bureaucracy, turf wars, and political dynamics within organizations often create resistance to rapid technological adoption.

    Take advantage of power flowing to the people!

  • View profile for NIKHIL NAN

    Global Procurement Strategy, Analytics & Transformation Leader | Cost, Risk & Supplier Intelligence at Enterprise Scale | Data & AI | MBA (IIM U) | MS (Purdue) | MSc AI & ML (LJMU, IIIT B)

    7,937 followers

    Large language models (LLMs) can improve their performance not just by retraining but by continuously evolving their understanding through context, as shown by the Agentic Context Engineering (ACE) framework.

    Consider a procurement team using an AI assistant to manage supplier evaluations. Instead of repeatedly inputting the same guidelines or losing specific insights, ACE helps the AI remember and refine past supplier performance metrics, negotiation strategies, and risk factors over time. This evolving “context playbook” allows the AI to provide more accurate supplier recommendations, anticipate potential disruptions, and adapt procurement strategies dynamically.

    In supply chain planning, ACE enables the AI to accumulate domain-specific rules about inventory policies, lead times, and demand patterns, improving forecast accuracy and decision-making as new data and insights become available.

    This approach results in up to 17% higher accuracy in agent tasks and reduces adaptation costs and time by more than 80%. It also supports self-improvement through feedback such as execution outcomes or supply chain KPIs, without requiring labeled data. By modularizing the process - generating suggestions, reflecting on results, and curating updates - ACE builds robust, scalable AI tools that continuously learn and adapt to complex business environments.

    #AI #SupplyChain #Procurement #LLM #ContextEngineering #BusinessIntelligence
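The generate → reflect → curate cycle the post attributes to ACE can be sketched as a tiny loop over a playbook dictionary. Everything below is an illustrative assumption (function names, the playbook layout, the KPI flag), not the framework's actual API:

```python
# Toy sketch of an evolving "context playbook": the model's context is
# updated from execution feedback, with no retraining and no labeled data.

def generate(playbook, task):
    """Produce a recommendation using the accumulated context rules."""
    rules = playbook.get(task, [])
    return {"task": task, "applied_rules": list(rules)}

def reflect(outcome, kpi_ok):
    """Turn execution feedback (e.g. a supply-chain KPI) into a lesson."""
    if kpi_ok:
        return f"reinforce: {outcome['applied_rules']}"
    return f"avoid: {outcome['applied_rules']}"

def curate(playbook, task, lesson):
    """Merge the lesson into the playbook that conditions the next run."""
    playbook.setdefault(task, []).append(lesson)
    return playbook

# One adaptation cycle, driven purely by an outcome signal.
playbook = {"supplier_eval": ["prefer dual sourcing"]}
outcome = generate(playbook, "supplier_eval")
lesson = reflect(outcome, kpi_ok=True)
playbook = curate(playbook, "supplier_eval", lesson)
```

The design point is that adaptation state lives in the context (the playbook) rather than in model weights, which is what makes the update cheap relative to fine-tuning.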

  • View profile for Rudina Seseri

    Venture Capital | Technology | Board Director

    20,284 followers

    For years, fine-tuning LLMs has required large amounts of data and human oversight. Small improvements can disrupt existing systems, requiring humans to go through and flag errors in order to fit the model to pre-existing workflows. This might work for smaller use cases, but it is clearly unsustainable at scale. However, recent research suggests that everything may be about to change.

    I have been particularly excited about two papers, from Anthropic and the Massachusetts Institute of Technology, which propose new methods that enable LLMs to reflect on their own outputs and refine performance without waiting for humans. Instead of passively waiting for correction, these models create an internal feedback loop, learning from their own reasoning in a way that could match, or even exceed, traditional supervised training in certain tasks.

    If these approaches mature, they could fundamentally reshape enterprise AI adoption. From chatbots that continually adjust their tone to better serve customers to research assistants that independently refine complex analyses, the potential applications are vast. In today’s AI Atlas, I explore how these breakthroughs work, where they could make the most immediate impact, and what limitations we still need to overcome.
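The internal feedback loop described here is, in spirit, a generate → self-critique → revise cycle. The sketch below is a toy illustration of that pattern only; `model` and `self_critique` are hypothetical stand-ins, not the methods from either paper:

```python
# Toy self-refinement loop: the system critiques its own draft and revises
# until the critique passes, with no human labels in the loop.

def model(prompt):
    # Placeholder: a real system would call an LLM here.
    return prompt.upper() if "REVISE" in prompt else "draft answer"

def self_critique(answer):
    """The model (or a checker) scores its own output; here, a toy length test
    chosen so the first draft fails and the revision passes."""
    return len(answer) >= 13

def self_refine(question, max_rounds=3):
    answer = model(question)
    for _ in range(max_rounds):
        if self_critique(answer):
            break                              # critique satisfied: stop
        answer = model(f"REVISE: {answer}")    # feed the critique back in
    return answer
```

In a real deployment the critique would itself be a model call (or an execution check), which is what removes the human from the correction loop.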

  • View profile for Mark Kelly

    CCO at Alldus & Founder of AI Ireland | Helping organisations scale through AI Strategy | Author of ‘AI Unleashed’ | AI Keynote Speaker

    36,061 followers

    Some Irish companies are quietly building AI agents that would make Silicon Valley jealous.

    Reviewing this year’s AI Ireland Awards applications has given me a front-row seat to what’s really happening on the ground. While everyone talks about ChatGPT Agent, I’m seeing businesses deploy agents that actually run operations:

    • A Dublin retailer built an AI that negotiates supplier contracts autonomously - saved thousands last year
    • A Cork manufacturing company has an agent that predicts machine failures 3 weeks early - zero unplanned downtime in 6 months
    • A Galway law firm created an AI that reviews 500-page contracts in 12 minutes - what used to take junior lawyers 2 days
    • A Belfast customer service team deployed an agent that resolves 68% of tickets without human intervention
    • A Limerick logistics company uses an AI that optimises delivery routes in real time - cut fuel costs by 13%

    Here’s the interesting part: these aren’t tech companies. They’re traditional Irish businesses that decided to stop waiting and start building.

    The most impressive part? Most of these solutions cost less than hiring one additional employee, but deliver 10x the productivity gains.

    The uncomfortable truth: companies still “researching” AI strategy will be competing against businesses that already have AI employees working 24/7.

    Question for business leaders: are you building AI agents, or are your competitors building them faster? Drop your industry below - curious which sectors are moving quickest. 👇

    #AIAgents #IrishBusiness #Automation #DigitalTransformation

  • View profile for Steve Rigby

    CEO, Rigby Group - a top 10 UK family business | Chair, Family Business UK | A leading voice championing UK private business, place-based philanthropy, and policies to drive economic growth

    14,334 followers

    The latest Institute of Directors survey makes for sobering reading. Business confidence has plummeted to −72, lower than even Covid pandemic levels. This isn’t just about one sector or region struggling, either; we’re seeing revenue expectations fall from +8 to −8, investment intentions fall to −27, and export expectations turn negative for the first time since records began. When only 8.1% of business leaders feel optimistic about the economic outlook and over 80% express pessimism, this survey can’t be ignored.

    There is a clear disconnect between government objectives for growth and the reality facing business leaders: 85% of those surveyed believe that current policies will be unsuccessful in driving growth.

    However, I remain cautiously optimistic that this situation can be turned around. The survey clearly identifies that businesses need action on:
    1. taxation (68%),
    2. employment costs (64%) and
    3. regulatory burden (48%).

    These aren’t unreasonable demands and, whilst we are unlikely to row back on the October 2024 tax changes, we can stop introducing more tax on business, we can ensure that the LPC considers the state of the economy, and we can map a path for business to rebuild confidence. The government has an opportunity here to demonstrate that it’s listening, by providing clarity on future tax policy.

    We’ve seen confidence recover before from seemingly challenging positions. The key is acknowledging the severity of the current situation and taking decisive action to address the root causes. British businesses are resilient and innovative, but they need a stable, predictable environment in which to operate and invest.

    We have 12 weeks until the next Budget. If we are serious about growth, we can’t milk the cow of business any further; we need to nurture and love business to see growth. Whilst many of us are away, the Treasury team will be working hard on the next Budget, where I am hopeful that common sense will apply and we will realise that if we need to raise taxes, the burden has to be borne by the whole population and not rest on the shoulders of just the 1% in the senior business community.

  • View profile for Raphaël MANSUY

    Data Engineering | DataScience | AI & Innovation | Author | Follow me for deep dives on AI & data-engineering

    33,962 followers

    The Rise of Autonomous AI Agents: Transforming Knowledge Work with Language Models

    Researchers from Renmin University of China have published a survey on a new paradigm in AI: autonomous agents powered by large language models (LLMs). This study provides a taxonomy for constructing these agents and highlights their potential to revolutionize industries by automating complex cognitive tasks.

    👉 A New Era of AI Assistants

    LLMs have demonstrated remarkable abilities in natural language understanding and generation. By integrating these models with key components like memory and planning modules, researchers can create autonomous agents capable of perceiving, reasoning, and acting to accomplish complex objectives. The proposed framework encompasses four modules:

    1. Profiling: defines the agent’s role using methods like handcrafting, LLM generation, or dataset alignment.
    2. Memory: enables agents to store and retrieve information using operations like reading, writing, and reflection.
    3. Planning: empowers agents to decompose tasks and generate plans using strategies like single-path reasoning, multi-path reasoning, and planning with feedback.
    4. Action: translates decisions into specific outputs by recalling memories or following plans, leveraging both internal LLM knowledge and external tools.

    LLM agents could automate a wide range of knowledge work and decision-making tasks, boosting productivity and innovation across sectors. The proposed framework offers a roadmap for designing more sophisticated AI assistants and chatbots.

    👉 Early Killer Apps

    The survey showcases several promising applications of LLM agents:

    - Social science research: analyzing datasets, generating hypotheses, and automating experiments.
    - Software engineering: code generation, debugging, and documentation.
    - Industrial automation: optimizing manufacturing, predicting maintenance, and enabling flexible production.
    - Robotics: enhancing robot perception, planning, and interaction capabilities.

    As the technology matures, we can expect more high-impact use cases to emerge, improving efficiency and decision-making, and tackling previously intractable problems.

    👉 The Road Ahead

    While the potential of LLM agents is vast, challenges remain:

    - Role-playing capability: accurately simulating less common roles or capturing human psychology.
    - Generalized human alignment: aligning agents with diverse human values.
    - Prompt robustness: improving the resilience of complex prompt frameworks.
    - Hallucination: mitigating false information generation.
    - Knowledge boundary: constraining LLM knowledge to match human users.
    - Efficiency: improving slow LLM inference speeds.

    Evaluating the safety and robustness of autonomous LLM agents is an open research question. As we refine these technologies and address the challenges, LLM agents could become indispensable tools, ushering in a new era of intelligent automation and discovery.
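The four-module decomposition above can be sketched as a single toy class. The class layout and the single-path planning strategy are illustrative assumptions, not code from the survey:

```python
# Toy agent wiring the survey's four modules together:
# profiling, memory, planning, action.

class Agent:
    def __init__(self, profile):
        self.profile = profile          # 1. Profiling: the agent's role
        self.memory = []                # 2. Memory: a simple read/write store

    def remember(self, note):
        """Memory write operation."""
        self.memory.append(note)

    def plan(self, goal):
        """3. Planning: single-path decomposition of a goal into steps."""
        return [f"step {i}: {part}"
                for i, part in enumerate(goal.split(", "), 1)]

    def act(self, goal):
        """4. Action: execute the plan, reflecting outcomes back into memory."""
        steps = self.plan(goal)
        for step in steps:
            self.remember(step)
        return f"{self.profile} completed {len(steps)} steps"

agent = Agent(profile="research assistant")
result = agent.act("collect data, analyse, summarise")
```

A real agent would replace `plan` and `act` with LLM and tool calls; the sketch only shows how the four modules hand off to one another.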
