Agentic AI Design Patterns are emerging as the backbone of real-world, production-grade AI systems, and this framing from Andrew Ng is gold.

Most current LLM applications are linear: prompt → output. But real-world autonomy demands more. It requires agents that can reflect, adapt, plan, and collaborate over extended tasks and in dynamic environments. That's where the RTPM framework comes in. It's a design blueprint for building scalable agentic systems:
➡️ Reflection
➡️ Tool-Use
➡️ Planning
➡️ Multi-Agent Collaboration

Let's unpack each one from a systems engineering perspective:

🔁 1. Reflection
This is the agent's ability to perform self-evaluation after each action. It's not just post-hoc logging; it's part of the control loop. Agents ask:
→ Was the subtask successful?
→ Did the tool/API return the expected structure or value?
→ Is the plan still valid given the current memory state?
Techniques include:
→ Internal scoring functions
→ Critic models trained on trajectory outcomes
→ Reasoning chains that validate step outputs
Without reflection, agents remain brittle; with it, they become self-correcting systems.

🛠 2. Tool-Use
LLMs alone can't interface with the world. Tool-use enables agents to execute code, perform retrieval, query databases, call APIs, and trigger external workflows. Tool-use design involves:
→ Function calling or JSON schema execution (OpenAI, Fireworks AI, LangChain, etc.)
→ Grounding outputs into structured results (e.g., SQL, Python, REST)
→ Chaining results into subsequent reasoning steps
This is how you move from "text generators" to capability-driven agents.

📊 3. Planning
Planning is the core of long-horizon task execution. Agents must:
→ Decompose high-level goals into atomic steps
→ Sequence tasks based on constraints and dependencies
→ Update plans reactively when intermediate states deviate
Design patterns here include:
→ Chain-of-thought with memory rehydration
→ Execution DAGs or LangGraph flows
→ Priority queues and re-entrant agents
Planning separates short-term LLM chains from persistent agentic workflows.

🤖 4. Multi-Agent Collaboration
As task complexity grows, specialization becomes essential. Multi-agent systems allow modularity, separation of concerns, and distributed execution. This involves:
→ Specialized agents: planner, retriever, executor, validator
→ Communication protocols: Model Context Protocol (MCP), A2A messaging
→ Shared context via centralized memory, vector DBs, or message buses
This mirrors multi-threaded systems in software, except now the "threads" are intelligent and autonomous.

Agentic design ≠ monolithic LLM chains. It's about constructing layered systems with runtime feedback, external execution, memory-aware planning, and collaborative autonomy.

Here is a deep-dive blog if you would like to learn more: https://lnkd.in/dKhi_n7M
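The Reflection and Tool-Use patterns above can be sketched as a single control loop. Below is a minimal, framework-free sketch; `run_tool`, `reflect`, and `run_plan` are illustrative names, not part of any real agent library, and the "tool" is a stub that fails once to show reflection triggering a retry.

```python
def run_tool(step: str) -> dict:
    """Stand-in for a real tool call (API, SQL, code execution).

    Simulates a transient failure on the first 'fetch' attempt so the
    reflection step has something to catch.
    """
    run_tool.calls = getattr(run_tool, "calls", 0) + 1
    if step == "fetch" and run_tool.calls == 1:
        return {"ok": False, "error": "timeout"}
    return {"ok": True, "data": f"result-of-{step}"}

def reflect(result: dict) -> bool:
    """Self-evaluation inside the control loop: did the tool return the
    expected structure and value?"""
    return result.get("ok", False) and "data" in result

def run_plan(plan: list[str], max_retries: int = 2) -> list[str]:
    """Execute a decomposed plan step by step, letting reflection gate
    progress and trigger retries instead of silently continuing."""
    outputs = []
    for step in plan:
        for _attempt in range(max_retries + 1):
            result = run_tool(step)
            if reflect(result):
                outputs.append(result["data"])
                break
        else:
            raise RuntimeError(f"step {step!r} failed after retries")
    return outputs

print(run_plan(["fetch", "summarize"]))
```

The first `fetch` fails, reflection rejects the result, and the loop retries before moving on; without the `reflect` gate the bad result would propagate into later steps.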
Frameworks for Developing Goal-Oriented LLMs
Explore top LinkedIn content from expert professionals.
Summary
Frameworks for developing goal-oriented LLMs are specialized systems that help large language models (LLMs) work toward specific objectives, manage complex tasks, and make decisions beyond simple prompt-response interactions. These frameworks enable LLMs to plan, reason, collaborate, and self-correct, turning them into more autonomous and purpose-driven AI agents.
- Apply structured reasoning: Use frameworks like Chain-of-Thought and Diagram of Thought to guide LLMs through step-by-step or multi-path reasoning processes, allowing them to tackle challenging problems with more depth and accuracy.
- Integrate planning and memory: Choose architectures that allow LLMs to break down goals, sequence tasks, and update plans, while maintaining memory across different stages for more reliable long-term workflows.
- Enable collaboration and tool use: Build systems where LLMs can interact with specialized agents and external tools for retrieving information, executing actions, and refining outputs, resulting in more dynamic and goal-driven AI solutions.
Researchers from Tsinghua University and Shanghai AI Laboratory have introduced a groundbreaking framework called the "Diagram of Thought" (DoT). DoT models reasoning as a directed acyclic graph (DAG) within a single LLM, incorporating propositions, critiques, and refinements, whereas Chain-of-Thought (CoT) represents reasoning as a linear sequence of steps. Here is how the DoT framework is implemented and used:

• 1. Framework Setup
1. Design the LLM architecture to support role-specific tokens (<proposer>, <critic>, <summarizer>).
2. Train the LLM on examples formatted with the DoT structure, including these role-specific tokens and DAG representations.

• 2. Reasoning Process
1. Initialization: Present the problem or query to the LLM.
2. Proposition Generation: The LLM, in the <proposer> role, generates an initial proposition or reasoning step. This proposition becomes a node in the DAG.
3. Critique Phase: The LLM switches to the <critic> role and evaluates the proposition, identifying any errors, inconsistencies, or logical fallacies. The critique is added as a new node in the DAG, connected to the proposition node.
4. Refinement: If critiques are raised, the LLM returns to the <proposer> role and generates a refined proposition based on the critique. The refined proposition becomes a new node in the DAG, connected to both the original proposition and the critique.
5. Iteration: Steps 3 and 4 repeat until propositions are verified or no further refinements are needed. Each iteration adds new nodes and edges to the DAG, representing the evolving reasoning process.
6. Summarization: Once sufficient valid propositions have been established, the LLM switches to the <summarizer> role and synthesizes the verified propositions into a coherent chain of thought. This process is analogous to performing a topological sort on the DAG.
7. Output: The final summarized reasoning is presented as the answer to the original query.

• 3. Mathematical Formalization
1. Represent the reasoning DAG as a diagram D in a topos E.
2. Model propositions as subobjects of the terminal object in E.
3. Represent logical relationships and inferences as morphisms between propositions.
4. Model critiques as morphisms to the subobject classifier Ω.
5. Use PreNet categories to capture both sequential and concurrent aspects of reasoning.
6. Take the colimit of the diagram D to aggregate all valid reasoning steps into a final conclusion.

• 4. Implementation and Deployment
1. Integrate the DoT framework into the LLM's training process, focusing on role transitions and DAG construction.
2. During inference, use auto-regressive next-token prediction to generate content for each role and construct the reasoning DAG.
3. Implement the summarization process to produce the final chain-of-thought output.
-
One of the most promising directions in software engineering is merging stateful architectures with LLMs to handle complex, multi-step workflows. While LLMs excel at one-step answers, they struggle with multi-hop questions requiring sequential logic and memory. Recent advancements, like O1 Preview's "chain-of-thought" reasoning, offer a structured approach to multi-step processes, reducing hallucination risks, yet scalability challenges persist. Configuring FSMs (finite state machines) to manage unique workflows remains labor-intensive, limiting scalability. Recent studies address this from various technical angles:

𝟏. 𝐒𝐭𝐚𝐭𝐞𝐅𝐥𝐨𝐰: This framework organizes multi-step tasks by defining each stage of a process as an FSM state, transitioning based on logical rules or model-driven decisions. For instance, in SQL-based benchmarks, StateFlow drives a linear progression through query parsing, optimization, and validation states. This configuration achieved success rates up to 28% higher on benchmarks like InterCode SQL and task-based datasets. Additionally, StateFlow's structure delivered substantial cost savings, lowering computation by 5x in SQL tasks and 3x in ALFWorld task workflows, by reducing unnecessary iterations within states.

𝟐. 𝐆𝐮𝐢𝐝𝐞𝐝 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐨𝐧 𝐅𝐫𝐚𝐦𝐞𝐰𝐨𝐫𝐤𝐬: This method constrains LLM output using regular expressions and context-free grammars (CFGs), enabling strict adherence to syntax rules with minimal overhead. By creating a token-level index for the constrained vocabulary, the framework brings token selection to O(1) complexity, allowing rapid selection of context-appropriate outputs while maintaining structural accuracy. For outputs requiring precision, like Python code or JSON, the framework demonstrated high retention of syntax accuracy without a drop in response speed.

𝟑. 𝐋𝐋𝐌-𝐒𝐀𝐏 (𝐒𝐢𝐭𝐮𝐚𝐭𝐢𝐨𝐧𝐚𝐥 𝐀𝐰𝐚𝐫𝐞𝐧𝐞𝐬𝐬-𝐁𝐚𝐬𝐞𝐝 𝐏𝐥𝐚𝐧𝐧𝐢𝐧𝐠): This framework combines two LLM agents, LLMgen for FSM generation and LLMeval for iterative evaluation, to refine complex, safety-critical planning tasks. Each plan iteration incorporates feedback on situational awareness, allowing LLM-SAP to anticipate possible hazards and adjust plans accordingly. Tested across 24 hazardous scenarios (e.g., child safety scenarios around household hazards), LLM-SAP achieved an RBS score of 1.21, a notable improvement in handling real-world complexities where safety nuances and interaction dynamics are key.

These studies mark progress, but gaps remain. Manual FSM configurations limit scalability, and real-time performance can lag in high-variance environments. LLM-SAP's multi-agent cycles demand significant resources, limiting rapid adjustments. Yet the research focus on multi-step reasoning and context responsiveness provides a foundation for scalable LLM-driven architectures, if configuration and resource challenges are resolved.
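To make the StateFlow idea concrete, here is a toy FSM for the SQL workflow mentioned above (parse → optimize → validate). The state names and `handle` logic are illustrative assumptions, not the paper's actual implementation; in a real StateFlow setup, each state would invoke an LLM or tool and the transition could be model-driven rather than rule-based.

```python
from enum import Enum, auto

class State(Enum):
    PARSE = auto()
    OPTIMIZE = auto()
    VALIDATE = auto()
    DONE = auto()
    ERROR = auto()

def handle(state: State, ctx: dict) -> State:
    """Each state does its unit of work on the shared context,
    then returns the next state (the FSM transition)."""
    if state is State.PARSE:
        ctx["ast"] = f"AST({ctx['query']})"
        return State.OPTIMIZE
    if state is State.OPTIMIZE:
        ctx["plan"] = f"optimized:{ctx['ast']}"
        return State.VALIDATE
    if state is State.VALIDATE:
        # Validation gate: bad plans route to ERROR instead of looping
        return State.DONE if "optimized" in ctx["plan"] else State.ERROR
    return state

def run_workflow(query: str):
    """Drive the FSM from PARSE to a terminal state, recording the path."""
    ctx, state = {"query": query}, State.PARSE
    trace = [state.name]
    while state not in (State.DONE, State.ERROR):
        state = handle(state, ctx)
        trace.append(state.name)
    return state, trace

state, trace = run_workflow("SELECT * FROM users")
print(trace)
```

Bounding the agent to explicit states and transitions is what yields the cost savings the post cites: the model cannot wander, so unnecessary iterations within states are cut.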
-
𝐑𝐀𝐆 𝐢𝐬 𝐞𝐯𝐨𝐥𝐯𝐢𝐧𝐠… 𝐅𝐚𝐬𝐭. We're moving beyond static document retrieval into systems that reason, plan, and adapt. This shift marks the rise of Agentic RAG, and it changes how AI delivers value.

𝐓𝐫𝐚𝐝𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐑𝐀𝐆 (𝐑𝐞𝐭𝐫𝐢𝐞𝐯𝐚𝐥-𝐀𝐮𝐠𝐦𝐞𝐧𝐭𝐞𝐝 𝐆𝐞𝐧𝐞𝐫𝐚𝐭𝐢𝐨𝐧):
1. A user asks a question
2. The system fetches relevant documents
3. The relevant documents set the context
4. The LLM responds with a summary
It's efficient for factual Q&A but not built for reasoning or dynamic problem solving. Think of it as handing a model a textbook and asking for a summary. Functional but shallow.

𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐑𝐀𝐆 𝐢𝐬 𝐝𝐢𝐟𝐟𝐞𝐫𝐞𝐧𝐭. You don't ask a question; you give it a goal. The system understands, retrieves, reasons, plans, uses tools, and loops back until it converges on a meaningful outcome. It introduces new capabilities like:
- Goal decomposition
- Planning and memory
- Iterative retrieval with tool use
- Feedback-driven improvement
This is no longer just "retrieve and summarize." It is cognitive orchestration for enterprise-scale intelligence.

𝐄𝐱𝐚𝐦𝐩𝐥𝐞: "Evaluate the best go-to-market strategy for launching our SaaS product in North America."
- 𝐓𝐫𝐚𝐝𝐢𝐭𝐢𝐨𝐧𝐚𝐥 𝐑𝐀𝐆: Retrieves blog posts and reports, then summarizes them.
- 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐑𝐀𝐆: Breaks the objective into sub-goals; analyzes regional market trends, customer behavior, competitor moves, pricing models, and regulatory factors; then returns a strategy recommendation tailored to your product and region.

Agentic RAG opens up intent-driven reasoning: LLMs that act with purpose, not just respond to prompts.

𝐇𝐚𝐯𝐞 𝐲𝐨𝐮 𝐬𝐭𝐚𝐫𝐭𝐞𝐝 𝐛𝐮𝐢𝐥𝐝𝐢𝐧𝐠 𝐰𝐢𝐭𝐡 𝐀𝐠𝐞𝐧𝐭𝐢𝐜 𝐑𝐀𝐆? #AgenticRAG #LLM #RAG
-
A Reading List for Agentic AI Architecture

The paradigm in Generative AI is shifting from isolated prompt engineering to the development of autonomous agentic systems. For AI engineers and researchers, this transition requires mastering a new set of cognitive architectures that enable Large Language Models (LLMs) to reason, plan, execute, and self-correct. To navigate this shift effectively, it is essential to move beyond the basics and study the foundational papers that are defining this new stack. Below is a curated roadmap of the essential literature and resources for building Agentic AI, categorized by function.

1. Reasoning & Deliberation (How Agents Think)
Before an agent acts, it must be able to decompose complex problems.
- Chain-of-Thought (CoT): The foundational technique for step-by-step reasoning. Paper: https://lnkd.in/ePMfBaiW
- Tree of Thoughts (ToT): Enables exploration of multiple reasoning paths and backtracking. Paper: https://lnkd.in/eQ2EKNaK
- Parsel: Algorithmic reasoning for hierarchical code generation. Paper: https://lnkd.in/ect7eZnq

2. Execution & Action (How Agents Interact)
Reasoning is only valuable when coupled with the ability to manipulate the external world.
- ReAct Framework: Synergizing reasoning and acting in language models; currently the industry-standard pattern. Paper: https://lnkd.in/eE3Nhfr2
- Function Calling: The mechanism allowing LLMs to deterministically execute code and API calls. Documentation: https://lnkd.in/eDVFeEUi

3. Architecture & Reliability (How Agents Improve)
Building robust systems requires memory, planning, and self-correction mechanisms.
- Reflexion (Self-Correction): Using verbal reinforcement to allow agents to learn from their own failures without fine-tuning. Paper: https://lnkd.in/eY2MfMZg
- Generative Agents (Memory): Architectures for long-term memory and believable behavior (the "Sims" paper). Paper: https://lnkd.in/epxnQfjj
- LLM+P (Planning): Offloading complex planning to classical solvers (PDDL) while using the LLM as a translator. Paper: https://lnkd.in/evezb-cx

4. Advanced Systems (Multi-Agent & Optimization)
- Multi-Agent Collaboration: Frameworks for orchestrating diverse agents to solve complex tasks. Resource (Microsoft AutoGen): https://lnkd.in/eKmVZ9NW
- Process-Supervised Reward Models (PRM): Rewarding the process of reasoning rather than just the outcome. Paper: https://lnkd.in/ekfWepWk

This list represents the current "syllabus" for advanced AI engineering. #ArtificialIntelligence #MachineLearning #AgenticAI #LLMs
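The ReAct pattern from the reading list (Thought → Action → Observation, repeated until a final answer) can be sketched with a scripted "model". The `TOOLS` dict and the scripted turns are assumptions for illustration; in a real agent each (thought, action, input) triple would be sampled from an LLM conditioned on the accumulated trace.

```python
# Toy tool registry; a real agent would expose search, code execution, etc.
TOOLS = {
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
    "lookup": lambda key: {"pi": "3.14159"}.get(key, "unknown"),
}

# Scripted model turns: (thought, action, action_input).
# action=None means the model emits its final answer.
SCRIPT = [
    ("I need the value of pi.", "lookup", "pi"),
    ("Now double it.", "calculator", "2 * 3.14159"),
    ("I have the answer.", None, "6.28318"),
]

def react(script):
    """Run the Thought -> Action -> Observation loop over scripted turns,
    interleaving tool calls (acting) with their observed results."""
    trace = []
    for thought, action, arg in script:
        if action is None:                 # final answer, stop the loop
            trace.append(("final", arg))
            return arg, trace
        observation = TOOLS[action](arg)   # act, then observe
        trace.append((action, observation))
    raise RuntimeError("script ended without a final answer")

answer, trace = react(SCRIPT)
print(answer)
```

The key design point ReAct established is that observations are fed back into the next reasoning step, so the model's thoughts are grounded in tool output rather than generated in one unchecked pass.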
-
Impressive survey on agentic reasoning for LLMs (bookmark this one). 135+ pages!

Why does it matter? LLMs reason well in closed-world settings, but they struggle in open-ended, dynamic environments where information evolves. The missing piece is action: static reasoning without interaction cannot adapt, learn, or improve from feedback.

This new survey systematizes the paradigm of Agentic Reasoning, where LLMs are reframed as autonomous agents that plan, act, and learn through continual interaction with their environment. It provides a unified roadmap that bridges thoughts and actions, offering actionable guidance for building agentic systems across environmental dynamics and optimization settings. The framework organizes agentic reasoning along three complementary dimensions:

1. Foundational Agentic Reasoning: Core single-agent capabilities including planning, tool use, and search. Agents decompose goals, invoke external tools, and verify results through executable actions. This is the bedrock.

2. Self-Evolving Agentic Reasoning: How agents improve through feedback, memory, and adaptation. Rather than following fixed reasoning paths, agents develop mechanisms for reflection, critique, and memory-driven learning. Reflexion, RL-for-memory, and continual adaptation link reasoning with learning.

3. Collective Multi-Agent Reasoning: Scaling intelligence from isolated solvers to collaborative ecosystems. Multiple agents coordinate through role assignment, communication protocols, and shared memory, with debate, disagreement resolution, and consistency enforced through multi-turn interactions.

Across all layers, the survey distinguishes two optimization modes: in-context reasoning (scaling inference-time compute through orchestration and search, without parameter updates) and post-training reasoning (internalizing strategies via RL and fine-tuning). The survey covers applications spanning math exploration, scientific discovery, embodied robotics, healthcare, and autonomous web research. It also reviews the benchmark landscape for evaluating agentic capabilities. I have been looking closely at this area of research, and here are some of the open challenges that remain: personalization, long-horizon interaction, world modeling, scalable multi-agent training, and governance frameworks for real-world deployment.
-
The 4 Agent Frameworks That Will Define AI Systems in 2026, and Why They Matter

By 2026, the most important question in AI won't be "Which LLM is the most powerful?" It'll be "Which agent framework enables scalable, coordinated, production-ready intelligence?" Because the next era of AI won't be driven by bigger models; it will be driven by LLM agents, multi-agent orchestration, and systems-level reasoning. Here are the frameworks leading that shift:

1. LangGraph
• Graph-native, stateful agent architecture
• Built for persistent memory, multi-agent control, and complex workflows

2. CrewAI
• Role-based agent coordination
• Enables structured teamwork across planning, writing, analysis, and execution

3. AutoGen
• Dialogue-first reasoning framework
• Ideal for research automation, interactive assistants, and iterative problem-solving

4. MetaGPT
• Simulates full software teams (PM, Dev, QA)
• Designed for end-to-end autonomous product development

Why This Is a Major Shift in AI Development
We're moving from single-step LLM outputs to agent ecosystems with:
• Shared context
• Delegation and role assignment
• Memory modules
• Feedback loops
• Planning, reasoning, and re-planning
• Self-improving behaviors

In other words: LLMs are becoming components, not complete solutions. And the frameworks you choose today will determine the intelligence, autonomy, and reliability your AI systems can achieve tomorrow. This is the foundation of the next generation of AI engineering, agentic workflows, and LLM-powered automation, and it's already reshaping how teams build.

🔁 Repost if this expanded your perspective on where AI agents are heading, so others can stay ahead.
👉 Follow Gabriel Millien for deeper insights on LLM agents, multi-agent architectures, AI infrastructure, and agent design patterns.
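The graph-native, stateful pattern these frameworks share can be shown framework-free: nodes read and mutate a shared state, and each node's return value selects the next edge. This is plain Python, deliberately not the LangGraph (or any other framework's) API; `plan`, `execute`, and `run_graph` are illustrative names.

```python
def plan(state: dict) -> str:
    """Planner node: decompose the goal into steps, then hand off."""
    state["steps"] = ["research", "draft"]
    return "execute"

def execute(state: dict) -> str:
    """Executor node: consume one step per visit; loop back to itself
    (a self-edge) until the plan is exhausted."""
    step = state["steps"].pop(0)
    state.setdefault("done", []).append(step)
    return "execute" if state["steps"] else "end"

# The graph: node name -> node function. Edges are the returned names.
GRAPH = {"plan": plan, "execute": execute}

def run_graph(entry: str = "plan") -> dict:
    """Walk the graph from the entry node until the 'end' sentinel,
    threading one shared, persistent state dict through every node."""
    state, node = {}, entry
    while node != "end":
        node = GRAPH[node](state)
    return state

state = run_graph()
print(state["done"])
```

The shared `state` dict is the "persistent memory" in miniature: every node sees what earlier nodes wrote, which is exactly what linear prompt chains lack.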