AI/ML

17 min read

Published on 01/18/2025
Last updated on 03/13/2025
How agent-oriented design patterns transform system development
The development of agentic systems introduces a new layer of complexity due to the autonomous and interactive nature of AI agents. Traditional software design patterns, while applicable, often fall short in handling the dynamic and decentralized characteristics of agentic frameworks.
Traditional programming paradigms are largely derived from decades-old models and assumptions. They struggle to accommodate the fluid, adaptive, and often highly unpredictable nature of multi-agent applications. In a time when adaptability and scalability are key, this blog explores the interrelated challenges of these programming paradigms and presents design patterns needed for effective agentic applications.
Differences between traditional software design patterns and agentic systems
Deterministic control flows vs. Emergent behaviors
Classic approaches assume a known sequence of operations, stable interfaces, and a predictable environment. In contrast, generative AI (GenAI) agents operate in dynamic contexts where their behavior emerges from interactions with each other and external data sources. Traditional paradigms are not designed to handle outcomes that cannot be neatly predicted in advance, making it cumbersome to adapt to situations where agents learn and evolve on the fly.
Rigid interfaces vs. Adaptive protocols
Traditional software engineering relies heavily on well-defined APIs, structured data schemas, and rigid communication protocols. Multi-agent GenAI applications, however, often require agents to negotiate or debate on the spot, dynamically refine their reasoning strategies, and reinterpret or transform data representations. This fluidity is challenging to represent or maintain in a rigidly typed, statically defined programming model.
Centralized decision-making vs. Distributed autonomy
Many existing programming models assume a single controlling logic that directs all components. GenAI multi-agent systems, however, are inherently decentralized. Each agent may have its own goals, learning algorithms, and decision-making processes, including its choice of large language model (LLM). Managing a set of independently evolving agents—each potentially running different models and strategies—calls for abstractions that support autonomy, collaboration, and negotiation rather than a top-down orchestration.
Synchronous execution vs. Concurrent, asynchronous interactions
Traditional paradigms, especially imperative and object-oriented models, tend to focus on synchronous call-and-response patterns. GenAI agents often interact concurrently, exchanging messages, updating internal states, and adapting strategies in real-time. Handling concurrent, asynchronous workflows is difficult in older paradigms, which are not inherently designed to reason about complex temporal dynamics or partial information.
Static reasoning vs. Continuous adaptation and learning
Classic software tends to encode logic in a static form—once compiled or deployed, the logic and decision structures remain stable unless explicitly updated by developers. GenAI agents continuously learn from new data, refine their internal models, and change their strategies based on evolving contexts. This requires programming abstractions that integrate learning loops, probabilistic reasoning, and adjustable model states directly into the development and runtime environments.
Well-defined errors vs. Uncertainty and partial knowledge
Traditional systems rely on well-defined contracts where errors and exceptions are anticipated and can be handled with known recovery strategies. GenAI systems, on the other hand, operate under uncertainty and incomplete information. Agents might provide probabilistic answers, contextually informed guesses, or heuristic-based decisions. Handling uncertainty, partial knowledge, and non-deterministic errors calls for paradigms that can gracefully incorporate statistical reasoning and robust fallback mechanisms rather than rely solely on explicit error conditions.
Single model of computation vs. Hybrid symbolic-statistical reasoning
Traditional programming tools excel at symbolic reasoning and deterministic logic flows. By contrast, GenAI agents often combine symbolic reasoning with statistical, gradient-based, or probabilistic methods. Achieving synergy between traditional code structures and advanced AI models (e.g., LLMs, reinforcement learning agents) demands a paradigm that can treat these models as first-class citizens, integrating their non-linear, probabilistic reasoning into the application’s architecture.
In essence, traditional programming paradigms are rooted in predictability, static structures, and top-down design. They are fundamentally misaligned with the inherently adaptive, emergent, and probabilistic nature of GenAI multi-agent ecosystems.
Embracing agentic design patterns
To build effective GenAI multi-agent applications, we need to incorporate uncertainty, concurrency, continuous learning, dynamic negotiation, and decentralized decision-making as first-order design principles.
As a result, several design patterns specific to agent systems have emerged, focusing on areas such as asynchronous tool orchestration, state management, failure handling, and adaptive goal reassignment. These patterns enable developers to structure agent-based systems more effectively, ensuring that they are both scalable and robust in production environments.
Tool orchestration patterns: Managing asynchronous tool execution
Agents frequently need to interact with external tools, services, or APIs to complete their tasks. This requires managing multiple asynchronous calls, which introduces complexity in ensuring tool availability, coordinating results, and handling failures. Two patterns are commonly employed to manage these interactions:
Chained tool orchestration
In this pattern, agents invoke tools sequentially, passing the output of one tool as input to the next. Because each step depends on the previous step’s result, this approach is straightforward to implement. However, it can introduce bottlenecks if tools take a long time to run, as all subsequent steps must wait for the current step to complete. Chained orchestration is best suited for workflows where each step must be completed before the next can begin.
For example, consider an agent that troubleshoots IP Access Lists. This agent uses two tools:
- Retriever – Logs into the router, retrieves the configuration, and saves it in a memory store.
- Troubleshooter – Parses the access list configuration, identifies issues, and suggests improvements.
Here, the Troubleshooter relies on the configuration data retrieved by the Retriever, so a chained approach is appropriate.
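To make this concrete, here is a minimal sketch of the chained flow in Python, with the Retriever and Troubleshooter reduced to hypothetical async stubs (the configuration strings and suggestions are made up):

```python
import asyncio

async def retriever(router: str) -> str:
    """Log into the router and return its configuration (stubbed here)."""
    await asyncio.sleep(0.1)  # simulate network I/O
    return f"access-list 101 permit ip any any  ! retrieved from {router}"

async def troubleshooter(config: str) -> str:
    """Parse the access-list configuration and suggest improvements (stubbed)."""
    await asyncio.sleep(0.1)  # simulate analysis
    return f"Tighten overly permissive rules in: {config}"

async def chained(router: str) -> str:
    config = await retriever(router)     # step 1 must complete first...
    return await troubleshooter(config)  # ...because step 2 consumes its output

print(asyncio.run(chained("router-1")))
```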

Parallel tool orchestration
In scenarios where tool executions do not depend on one another, agents can invoke multiple tools in parallel. This approach reduces overall execution time by leveraging concurrent processing. However, it requires careful management of concurrency, error handling, and data synchronization. Once all parallel operations have completed, their results are aggregated for further analysis or decision making.
Extending the previous example, suppose the agent needs to troubleshoot access lists on multiple routers. The Retriever tool can be called in parallel for each router because these operations are independent. Then, as soon as a router’s configuration is retrieved, the Troubleshooter can be invoked to process that specific configuration without waiting for the others, further improving efficiency.
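A sketch of the parallel variant, reusing the hypothetical retriever and troubleshooter stubs from the chained sketch above. Each router’s retrieve-then-troubleshoot pipeline runs concurrently, and each Troubleshooter call starts as soon as its own configuration arrives:

```python
import asyncio

async def pipeline(router: str) -> str:
    config = await retriever(router)     # retrievals are independent per router
    return await troubleshooter(config)  # starts without waiting for other routers

async def main() -> list[str]:
    routers = ["router-1", "router-2", "router-3"]
    return await asyncio.gather(*(pipeline(r) for r in routers))

for report in asyncio.run(main()):
    print(report)
```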

7 benefits of tool orchestration
When integrating tool or function calling into an application through an LLM API such as OpenAI's, rather than hard-coding these calls, developers can leverage a range of benefits tied to the flexibility, adaptability, and cognitive leverage that LLMs provide:
- Natural language as a universal interface: Make no mistake, this represents a decisive turning point in how we conceive of and interact with computational tools. Instead of painstakingly tailoring code to conform to rigid input formats, we now embed our intentions, constraints, and objectives directly into language itself. With this shift, every facet of tool integration, from initial design to ongoing adaptation, becomes dramatically more intuitive, fluid, and responsive to change. What once demanded meticulous engineering overhead can now be guided by a model’s deep understanding of human language, radically transforming the very foundations of software orchestration.
- Context-aware decisions: The model can interpret function-calling requests within a broader context of a conversation or a user’s intent. For instance, if a user’s query implies the need for data retrieval, the model can select the appropriate tool or function without the developer explicitly wiring every logic branch. This reduces the cognitive load on developers to anticipate every scenario beforehand.
- Dynamic adaptation: Traditional code must be updated, recompiled, or redeployed whenever new tools or functions are introduced. With an LLM-driven approach, you can present new function definitions to the model at runtime and have it start using them immediately. This adaptability speeds up iteration and integration cycles (see the sketch after this list).
- Reduced engineering overhead: Instead of writing elaborate decision trees, glue code, or orchestration layers, the model’s reasoning capabilities can handle aspects like choosing the right tool, parsing user input, or adapting parameters. This cuts down on boilerplate code and reduces the complexity of managing large tool ecosystems.
- Robust error handling and validation: When function calls fail or return errors, the model can interpret the error messages, adjust its strategy, and try an alternative approach. It can reformulate a query, pass different parameters, or choose a different tool, reducing the need for strictly coded fallback logic. This built-in resilience can make the system more user-friendly and less brittle.
- Integration of reasoning and execution: Traditional programming often separates the “thinking” (i.e., logic) and “doing” (i.e., calling functions) stages. With an LLM acting as an orchestrator, the reasoning process (deciding which tool to use and how) and the execution process (actually calling that tool) are fluidly integrated. This results in a more responsive system that can align reasoning steps directly with actions taken, improving overall coherence and efficiency.
- Scalable complexity management: As the system grows to include many tools, specialized modules, or third-party APIs, coordinating them through a single LLM interface can scale more easily than traditional code-based dispatch logic. The model’s internal representations help it navigate complex toolsets without exponentially increasing code complexity.
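To illustrate the dynamic-adaptation point above, the sketch below hands a brand-new tool definition to a model at request time using OpenAI-style function calling. The tool name, JSON schema, and model name are illustrative assumptions, not a prescribed setup:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# A tool definition is plain data, so it can be constructed or loaded at
# runtime and used on the very next request, with no recompile or redeploy.
get_router_config = {
    "type": "function",
    "function": {
        "name": "get_router_config",  # hypothetical tool
        "description": "Retrieve the running configuration of a router.",
        "parameters": {
            "type": "object",
            "properties": {
                "router": {"type": "string", "description": "Router hostname"},
            },
            "required": ["router"],
        },
    },
}

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Check the access list on router-1."}],
    tools=[get_router_config],
)
print(response.choices[0].message.tool_calls)
```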
State in multi-agent applications
In multi-agent systems (MAS), state refers to the comprehensive set of variables that define the condition of both individual agents and the overall system at any given time. This includes an agent’s internal parameters, such as beliefs, goals, and knowledge, as well as the shared attributes of the system’s environment. According to the LangGraph documentation, state can be broadly categorized into individual agent state (specific to each agent) and global system state (the collective state of all agents and their environment).
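As a rough illustration, the sketch below models these two categories as typed dictionaries, in the spirit of LangGraph’s TypedDict state schemas; every field name here is an assumption for illustration:

```python
from typing import TypedDict

class AgentState(TypedDict):
    """Individual agent state: one agent's internal parameters."""
    beliefs: dict[str, str]  # what the agent currently holds to be true
    goal: str                # the agent's active objective
    scratchpad: list[str]    # working memory for in-flight reasoning

class GlobalState(TypedDict):
    """Global system state: all agents plus the shared environment."""
    agents: dict[str, AgentState]
    environment: dict[str, str]
    message_log: list[str]   # inter-agent messages visible to all
```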
Importance of state
- Decision-making: Agents use state information to make context-aware decisions. The current state provides the necessary context for selecting actions that align with objectives and adapt to environmental changes.
- Coordination and collaboration: Shared state information enables agents to synchronize their actions, facilitating cooperation and efficient resource sharing.
- Learning and adaptation: Agents update their states dynamically as they learn from new interactions, improving their future responses.
Uses of state
- Communication: Agents share state information to build a common understanding of their environment and intentions, aiding cooperation.
- Planning and forecasting: State data allows agents to simulate future scenarios, enabling proactive planning.
- Monitoring and control: System designers use state data to monitor the MAS, ensuring stability and intervening when necessary.
Agent state patterns: Strategies for memory and state retention
Effective state management is fundamental to multi-agent systems. It underpins the ability of agents to make decisions, coordinate, adapt, and collectively function in complex environments. Several patterns are commonly used to handle agent state:
Ephemeral state
In an ephemeral state pattern, agents do not retain any information once a task is completed. Each new interaction is treated as a stateless operation, which simplifies the system’s architecture and is suitable for short-lived or simple tasks. However, it limits the agent’s ability to provide continuity for longer or more complex workflows.
Example: After the Router Agent finishes suggesting configuration improvements, it immediately discards the retrieved router configuration. Thus, no historical information is retained for future interactions.
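A minimal sketch of the ephemeral pattern: all data lives in local variables and vanishes when the call returns (the retrieval and analysis logic is stubbed):

```python
def troubleshoot_once(router: str) -> str:
    """Stateless, one-shot operation: nothing is retained between calls."""
    config = f"access-list 101 permit ip any any  ! from {router}"  # stubbed retrieval
    suggestion = f"Tighten overly permissive rules in: {config}"    # stubbed analysis
    return suggestion  # config and analysis are discarded on return

print(troubleshoot_once("router-1"))
```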
Persistent state
For agents that must maintain context over multiple interactions, a persistent state pattern is used. Agent data is stored in external databases or in-memory data stores, allowing the agent to recall previous information for subsequent tasks. This pattern is especially useful in multi-turn dialogue systems, where retaining context across multiple conversations is essential.
Example: If the Router Agent offers a chat interface in which the user can ask follow-up questions about the router configuration across multiple sessions, the configuration data would be stored in a database. This allows the agent to retrieve and reference the information as needed, even long after the initial interaction.
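A minimal sketch of the persistent-state pattern, with SQLite standing in for whatever external store the agent actually uses (the table and function names are hypothetical):

```python
import sqlite3

conn = sqlite3.connect("agent_state.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS router_configs (user_id TEXT PRIMARY KEY, config TEXT)"
)

def save_config(user_id: str, config: str) -> None:
    """Persist the retrieved configuration so later sessions can use it."""
    conn.execute(
        "INSERT OR REPLACE INTO router_configs VALUES (?, ?)", (user_id, config)
    )
    conn.commit()

def load_config(user_id: str) -> str | None:
    """Recall the configuration from a previous interaction, if any."""
    row = conn.execute(
        "SELECT config FROM router_configs WHERE user_id = ?", (user_id,)
    ).fetchone()
    return row[0] if row else None

save_config("alice", "access-list 101 permit ip any any")
print(load_config("alice"))  # still available in any later session
```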
State caching
A state caching pattern provides a hybrid approach, storing data only for the duration of a session and discarding it afterward. This strategy can boost performance by reducing the overhead of frequent reads and writes to an external store while avoiding the complexities of managing long-term data persistence.
Example: In this case, the Router Agent retains the router configuration in memory for the length of a single chat session. The user can pose multiple questions about the configuration during that session, but once the chat ends, the cached data is removed.
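A minimal sketch of session-scoped caching; the session identifiers and helper functions are hypothetical:

```python
# Session-scoped cache: lives in process memory, dropped when the chat ends.
session_cache: dict[str, str] = {}

def start_session(session_id: str, config: str) -> None:
    session_cache[session_id] = config  # cache for follow-up questions

def answer(session_id: str, question: str) -> str:
    config = session_cache[session_id]  # no round-trip to an external store
    return f"Answering {question!r} from cached config ({len(config)} bytes)"

def end_session(session_id: str) -> None:
    session_cache.pop(session_id, None)  # cached data goes away with the session

start_session("s1", "access-list 101 permit ip any any")
print(answer("s1", "What does rule 101 allow?"))
end_session("s1")
```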
Fail-safe design: Handling agent failures and edge cases
Agents can encounter unexpected scenarios such as unresponsive tools, partial failures, or performance degradation. Fail-safe strategies ensure that agents remain robust and continue functioning despite these challenges.
Fallback strategy
In a fallback strategy, agents have backup options when they fail to complete a task. For example, if an agent encounters an error while invoking a tool, it can switch to a backup tool or attempt a retry. This approach keeps the system resilient even when individual components fail.
Note: In practice, LLMs sometimes call the wrong tool for reasons beyond the scope of this blog. Nevertheless, it is crucial to detect and, if possible, recover from such issues without user intervention.
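A minimal sketch of a fallback strategy with retries and exponential backoff, using toy stand-ins for real tools:

```python
import time
from typing import Callable

def call_with_fallback(primary: Callable, backup: Callable, arg, retries: int = 2):
    """Retry the primary tool with backoff, then switch to the backup tool."""
    for attempt in range(retries):
        try:
            return primary(arg)
        except Exception:
            time.sleep(2 ** attempt)  # back off before the next attempt
    return backup(arg)  # last resort: the backup tool

def flaky_plotter(data):
    raise RuntimeError("plotting library version mismatch")

def backup_plotter(data):
    return f"line_chart({data})"

print(call_with_fallback(flaky_plotter, backup_plotter, [3, 1, 4]))
```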
Graceful degradation
When an agent experiences a partial failure or reduced performance, it should continue to operate in a limited capacity. This ensures that even if optimal functionality is not possible, the agent still provides a meaningful output rather than stopping altogether.
Example: Long-running requests can cause an LLM to take several minutes to generate a response. Strategies to handle this situation include:
- Streaming partial results to the user as they become available (OpenAI Streaming API), as sketched after this list.
- Prompting the user to decide whether to wait for completion or modify the query.
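For instance, streaming partial results with the OpenAI Python SDK might look like this (the model name and prompt are illustrative):

```python
from openai import OpenAI

client = OpenAI()

stream = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize this router config: ..."}],
    stream=True,  # tokens arrive as they are generated
)
for chunk in stream:
    delta = chunk.choices[0].delta.content
    if delta:
        print(delta, end="", flush=True)  # show partial results immediately
```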
Timeout management
To prevent agents from becoming stuck when tools are slow or unresponsive, timeouts should be enforced on tool executions. If a tool fails to respond within a specified time, the agent can either retry or switch to a fallback tool.
Note: Timeout management is a well-known practice in network programming and is especially important for agents providing chat interfaces. Rate limits, token limits, and external API constraints (e.g., Spotify or Netflix rate limits) underscore the need for robust timeout and recovery mechanisms.
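A minimal sketch of timeout enforcement with asyncio; the slow and backup tools are stand-ins, and the timeout and retry values are illustrative:

```python
import asyncio

async def slow_tool(arg: str) -> str:
    await asyncio.sleep(10)  # simulates an unresponsive tool
    return f"result({arg})"

async def backup_tool(arg: str) -> str:
    return f"backup_result({arg})"

async def call_with_timeout(arg: str, timeout: float = 2.0, retries: int = 1) -> str:
    """Enforce a per-call timeout; retry on expiry, then fall back."""
    for _ in range(retries + 1):
        try:
            return await asyncio.wait_for(slow_tool(arg), timeout=timeout)
        except asyncio.TimeoutError:
            continue  # retry once on timeout
    return await backup_tool(arg)  # switch to the fallback tool

print(asyncio.run(call_with_timeout("router-1")))
```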
Dynamic goal reassignment: Adapting agents to changing task conditions
Agents often operate in environments where task conditions or goals may change in real-time. To adapt to these changes, agents must be able to reassign goals dynamically based on new inputs or external factors. Two main patterns are used for dynamic goal reassignment:
- Goal reevaluation: In this pattern, agents periodically reevaluate their goals based on the current state of the environment. If new information suggests that the original goal is no longer relevant or achievable, the agent can switch to a new goal. This is particularly useful in dynamic environments where conditions are constantly changing.
- Task delegation: When an agent is unable to achieve its assigned goal, it can delegate the task to another agent with the appropriate capabilities. This pattern is often used in multi-agent systems where different agents have specialized roles. Task delegation ensures that no single agent becomes a bottleneck and the overall system remains efficient (see the sketch after this list).
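A toy sketch of both patterns together; the Goal class, the specialist registry, and the routing rule are all illustrative assumptions:

```python
from dataclasses import dataclass

@dataclass
class Goal:
    description: str

# Hypothetical registry of specialist agents available for delegation.
SPECIALISTS = {
    "movies": lambda goal: f"MovieFinder handled: {goal.description}",
}

def reevaluate(current: Goal, new_input: str | None) -> Goal:
    """Goal reevaluation: replace the goal when new input invalidates it."""
    return Goal(new_input) if new_input else current

def handle(goal: Goal) -> str:
    """Task delegation: route goals outside our capabilities to a specialist."""
    if "movie" in goal.description:
        return SPECIALISTS["movies"](goal)
    return f"Primary agent handled: {goal.description}"

goal = Goal("book an Italian restaurant at 7 p.m.")
goal = reevaluate(goal, "book a sushi restaurant at 7 p.m.")  # user changed preference
print(handle(goal))
print(handle(Goal("find a movie playing nearby around 9 p.m.")))  # delegated
```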
How agents adapt to evolving requirements
To illustrate how goal reassignment operates in practice, let’s consider some examples.
Example 1: Real-time data analysis
Scenario: A user chats with a “DataBot” to quickly analyze a CSV file of daily sales.
- The user instructs, “DataBot, load this sales.csv and create a line chart of daily sales.”
- The agent tries to use Tool A (a plotting library) but fails due to a version mismatch. It immediately reevaluates the goal of “create a line chart” and checks alternative ways to achieve it.
- The agent switches to Tool B, a fallback plotting service, successfully generating the line chart.
- The user then says, “Actually, show me the top five products by revenue in a bar chart instead.”
- Goal reevaluation occurs again: “line chart of daily sales” is no longer the goal. The agent updates its short-term objective to “generate a bar chart of the top five products by revenue” and completes the task in the same session.
Why it matters: Within a single conversation, the agent repeatedly adapts to new objectives and tool failures, ensuring the user quickly gets the desired output without breaking continuity.
Example 2: Quick restaurant reservation and event planning
Scenario: A user employs a personal “evening out” agent to plan a date night within minutes.
- The user says, “Book a table for two at an Italian restaurant near me at 7 p.m. tonight.”
- The agent searches local Italian restaurants. Right before finalizing, the user changes preferences: “Actually, let’s do sushi instead.”
- The agent reevaluates the goal from “book an Italian restaurant” to “book a sushi restaurant,” immediately updating its reservation query. No multi-day planning is required; this all happens in the same chat.
- Next, the user wants to add an activity: “Find a movie playing nearby around 9 p.m.”
- The agent doesn’t handle movie listings itself. It delegates that task to a specialized “MovieFinder” agent, obtains the showtimes, and then returns to book the ticket, still within the same conversation.
Why it matters: In the span of a few messages, the agent’s goals pivot from booking Italian food to sushi, then to finding a movie. Task delegation to a specialized agent happens instantly, all in one session.
Example 3: Live coding assistant
Scenario: A user interacts with a coding assistant to generate, debug, and refactor code in a single chat.
- The user asks, “Write a Python function that parses JSON from a URL and returns the result.”
- After the agent provides the code, the user suddenly decides, “I actually need this in Java, not Python.” The agent reevaluates the goal from “Python function” to “Java method.” Without skipping a beat, it regenerates the code in Java.
- Next, the assistant attempts to compile the code using an internal Java compiler tool, but an error occurs. The assistant delegates the debugging step to a specialized “JavaDebug” sub-agent that can parse compiler messages and suggest fixes.
Why it matters: All this happens in one conversation, demonstrating real-time adaptation to new goals and the handing off of tasks that require specialized capabilities.
Example 4: Live weather and travel assignments
Scenario: A user interacts with a “Weather & Travel Bot” to plan a same-day trip.
- The user says, “Check the weather in Chicago for this afternoon.”
- The agent reports it’s stormy. The user updates the goal: “I need a train ticket to Milwaukee instead, leaving in the next hour.”
- The agent immediately reevaluates: the “weather in Chicago” goal is outdated, replaced by “find a train to Milwaukee.” It queries the train schedules.
- Payment fails at the booking stage. The agent delegates the payment processing to a specialized “PaymentService” micro-agent. Once processed, it returns a confirmation number to the user, all within minutes.
Why it matters: The user changes objectives multiple times in a short interaction, showcasing how an agent must pivot instantly rather than following an extended, multi-day planning pipeline.
Key takeaways:
- Goal reevaluation can happen mid-conversation, letting agents instantly adapt to user-driven changes or tool failures.
- Task delegation doesn’t have to be part of a large-scale multi-agent system; it can occur in brief sessions when a specialized micro-agent or service is needed to handle a subtask.
- Even short user interactions can benefit from dynamic goal reassignment, ensuring that agents remain flexible and responsive in real time.
Agent-oriented design patterns for GenAI systems
Agent-oriented design patterns are essential tools for developers looking to fully understand and master agentic systems. By leveraging patterns for tool orchestration, state management, fail-safe design, and dynamic goal reassignment, developers can build systems that are both scalable and robust.
This blog is part of our series, Agentic Frameworks, a culmination of extensive research, experimentation, and hands-on coding with over 10 agentic frameworks and related technologies. Explore more insights by reading other blogs in this series:
- Why enterprises need to invest in agentic frameworks now
- Why agentic development is calling for a critical skills upgrade
References and sources
- Deterministic Control Flows vs. Emergent Behaviors:
- Bordini, R. H., El Fallah Seghrouchni, A., Hindriks, K., Logan, B., & Ricci, A. (2020). Agent programming in the cognitive era. Autonomous Agents and Multi-Agent Systems, 34(2), 1-31.
- Rigid Interfaces vs. Adaptive Protocols:
- Dinu, M.-C., Leoveanu-Condrei, C., Holzleitner, M., Zellinger, W., & Hochreiter, S. (2024). SymbolicAI: A framework for logic-based approaches combining generative models and solvers.
- Centralized Decision-Making vs. Distributed Autonomy:
- Park, J. S., et al. (2023). Generative Agents: Interactive Simulacra of Human Behavior.
- Synchronous Execution vs. Concurrent, Asynchronous Interactions:
- Li, S. (2016). Multi-agent System Design for Dummies.
- Static Reasoning vs. Continuous Adaptation and Learning:
- Sengar, S. S., Hasan, A. B., Kumar, S., & Carroll, F. (2024). Generative artificial intelligence: a systematic review and applications. Multimedia Tools and Applications, 83(1), 1-25.
- Well-Defined Errors vs. Uncertainty and Partial Knowledge:
- Spinellis, D. (2024). Pair Programming With Generative AI. IEEE Software, 41(3), 16-18.
- Single Model of Computation vs. Hybrid Symbolic-Statistical Reasoning:
- Dinu, M.-C., Leoveanu-Condrei, C., Holzleitner, M., Zellinger, W., & Hochreiter, S. (2024). SymbolicAI: A framework for logic-based approaches combining generative models and solvers.
- LangGraph Documentation: Low-Level Concepts - State.
- Russell, S., & Norvig, P. (2021). Artificial Intelligence: A Modern Approach. Pearson.
- Wooldridge, M. (2009). An Introduction to Multiagent Systems. Wiley.
- Stone, P., & Veloso, M. (2000). Multiagent Systems: A Survey from a Machine Learning Perspective.
- Shoham, Y., & Leyton-Brown, K. (2008). Multiagent Systems: Algorithmic, Game-Theoretic, and Logical Foundations. MIT Press.
- Durfee, E. H., Lesser, V. R., & Corkill, D. D. (1989). Cooperative Distributed Problem Solving. AAAI Conference Proceedings.
- Jennings, N. R., Sycara, K., & Wooldridge, M. (1998). A Roadmap of Agent Research and Development. Autonomous Agents and Multi-Agent Systems.
