In the previous topic, we gained a basic understanding of what components make up an agent. Now, let's examine the general structure of how an agent operates.
Understanding the agentic loop
There's a widely used abstraction for explaining this process called the agentic loop, which consists of three core steps:
Thought → Act → Observe
The complete flow looks like this:
Thought: Using all available data, the agent plans its next action. It analyzes the current state, evaluates available tools, and decides what to do next.
Act: The agent executes the action determined in the previous step. This might involve calling a tool, making an API request, or performing a computation.
Observe: The agent collects and processes the results of the executed action. It examines the outcome and determines if it has achieved the goal.
Example: scheduling agent
Let's walk through this with a practical example of an agent that schedules meetings.
[Input Query]: "Please schedule a meeting with Riley Morgan tomorrow for one hour."
Cycle 1:
[Thought]: To schedule this meeting, I need to:
1. Determine tomorrow's date
2. Check my calendar for available time slots
3. Check Riley Morgan's calendar for availability
4. Find a mutual free slot that's at least one hour long
5. Create the meeting invite
First, I should check what tomorrow's date is and then query the calendar system.
[Act]: Call tool "get_current_date" to determine tomorrow's date
[Observe]: Current date is October 6, 2025. Tomorrow is October 7, 2025.

Cycle 2:
[Thought]: Now I know the date. Next, I need to check calendar availability. I'll query both calendars for October 7, 2025.
[Act]: Call tool "check_calendar_availability" with parameters:
* user: "me"
* contact: "Riley Morgan"
* date: "2025-10-07"
* duration: 60 minutes
[Observe]: Available mutual time slots:
* 10:00-11:00 AM
* 2:00-3:00 PM
* 4:00-5:00 PM

Cycle 3:
[Thought]: I found available slots. I'll select the earliest available time (10:00 AM) and create the meeting.
[Act]: Call tool "create_meeting" with parameters:
* attendees: ["me", "Riley Morgan"]
* date: "2025-10-07"
* start_time: "10:00"
* duration: 60
* title: "Meeting with Riley Morgan"
[Observe]: Meeting created successfully. Calendar invite sent to Riley Morgan.

Finishing cycle:
[Thought]: The meeting has been successfully scheduled. The goal is achieved.
[Output]: I've scheduled a one-hour meeting with Riley Morgan for tomorrow, October 7, 2025, at 10:00 AM. A calendar invite has been sent.
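Behind this trace, the agent needs machine-readable descriptions of its tools so the model knows what it can call, and the model replies with a structured call rather than free text. Here is a minimal sketch of what that might look like; the schema layout and field names are illustrative assumptions, not any specific provider's format.

# Illustrative tool definitions the agent could expose to the LLM.
tools = [
    {
        "name": "get_current_date",
        "description": "Return today's date.",
        "parameters": {},
    },
    {
        "name": "check_calendar_availability",
        "description": "Find mutual free slots for two people on a given date.",
        "parameters": {
            "user": "string",
            "contact": "string",
            "date": "string (YYYY-MM-DD)",
            "duration": "integer (minutes)",
        },
    },
    {
        "name": "create_meeting",
        "description": "Create a calendar event and send invites.",
        "parameters": {
            "attendees": "list of strings",
            "date": "string (YYYY-MM-DD)",
            "start_time": "string (HH:MM)",
            "duration": "integer (minutes)",
            "title": "string",
        },
    },
]

# In Cycle 3, the model's structured response might look like this:
tool_call = {
    "name": "create_meeting",
    "arguments": {
        "attendees": ["me", "Riley Morgan"],
        "date": "2025-10-07",
        "start_time": "10:00",
        "duration": 60,
        "title": "Meeting with Riley Morgan",
    },
}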
How is this implemented?
Let's move from the abstract concept to how this works in practice. Here's a simplified implementation of an agent in Python.
def agent_loop(user_query, system_prompt, tools):
    """
    Core agentic loop: Thought → Act → Observe
    """
    conversation_history = [{"role": "user", "content": user_query}]

    while True:
        # THOUGHT: LLM decides what to do next
        llm_response = call_llm(system_prompt, conversation_history, tools)
        conversation_history.append({"role": "agent", "content": llm_response})

        # Check if the LLM wants to use a tool
        if llm_response.has_tool_calls():
            # ACT: Execute the tool
            tool_result = execute_tool(llm_response.tool_call)

            # OBSERVE: Add the result to history
            conversation_history.append({"role": "tool", "content": tool_result})

            # Loop continues → back to THOUGHT
            continue
        else:
            # No tool call → final answer reached
            return llm_response.content

Thought: This phase involves an LLM call. The LLM handles the planning process. We shape the overall behavior through the system prompt and provide the available tools to the LLM. To maintain context, we include the entire conversation history with each call.
# Notice that we pass every required detail to an LLM call
llm_response = call_llm(system_prompt, conversation_history, tools)
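In practice, call_llm is a thin wrapper around a chat-completion request to whatever model provider you use. Below is a rough sketch assuming the OpenAI Python SDK; the model name is a placeholder, and the simplified pseudocode above deliberately abstracts away provider-specific details such as message roles and the shape of the returned object.

from openai import OpenAI  # assumption: using the OpenAI Python SDK as the backend

client = OpenAI()

def call_llm(system_prompt, conversation_history, tools):
    # The system prompt shapes behavior, the history provides context ("memory"),
    # and the tool definitions tell the model which actions it may request.
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "system", "content": system_prompt}] + conversation_history,
        tools=tools,
    )
    # The returned message carries either text or structured tool calls.
    return response.choices[0].message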
Action: Whether an action takes place at all is determined by the model's output. The model returns the function name and required parameters in a structured format. We handle the function execution ourselves: the LLM doesn't execute anything directly; it tells us what to do and waits for the results.

# Executing the tool is our responsibility
tool_result = execute_tool(llm_response.tool_call)
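One straightforward way to implement execute_tool is a dispatch table that maps tool names to ordinary Python functions. The stub functions and the tool_call attributes below are assumptions made for illustration, not part of the original sketch.

from datetime import date

# Simplified stubs for the tools used in the scheduling example.
def get_current_date():
    return date.today().isoformat()

def check_calendar_availability(user, contact, date, duration):
    ...  # query the calendar backend here

def create_meeting(attendees, date, start_time, duration, title):
    ...  # create the event and send invites here

# Dispatch table: tool name → function.
TOOL_REGISTRY = {
    "get_current_date": get_current_date,
    "check_calendar_availability": check_calendar_availability,
    "create_meeting": create_meeting,
}

def execute_tool(tool_call):
    # tool_call is assumed to expose the name and arguments chosen by the model.
    func = TOOL_REGISTRY[tool_call.name]
    try:
        return func(**tool_call.arguments)
    except Exception as exc:
        # Returning the error as text lets the agent observe the failure and adjust.
        return f"Tool '{tool_call.name}' failed: {exc}"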
Observation: We implement this phase in multiple places. We add function execution results to the conversation history, alongside the initial user message and the agent's reasoning. By passing the full conversation history to the LLM, we ensure it knows the original request, the agent's previous plans, which tools it called, and what results it obtained. This creates the agent's "memory": its ability to act consistently and methodically.

Memory: The memory mechanism works by passing all conversation history messages to the LLM. We can extend this approach by adding other relevant context or by storing user information and carrying it across sessions.
conversation_history = [{"role": "user", "content": user_query}]

while True:
    ...
    conversation_history.append({"role": "agent", "content": llm_response})

    if llm_response.has_tool_calls():
        ...
        conversation_history.append({"role": "tool", "content": tool_result})

Loop: Although the loop is written as infinite, practical constraints such as time limits, token budgets, or a maximum number of cycles typically bound how long the agent can run.
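As an example, here is a minimal sketch of such a guardrail, reusing call_llm and execute_tool from the snippets above; the limits are arbitrary example values.

import time

MAX_CYCLES = 10    # arbitrary example limit on loop iterations
MAX_SECONDS = 60   # arbitrary example limit on wall-clock time

def bounded_agent_loop(user_query, system_prompt, tools):
    conversation_history = [{"role": "user", "content": user_query}]
    started = time.monotonic()

    for _ in range(MAX_CYCLES):
        # Stop if the time budget is exhausted.
        if time.monotonic() - started > MAX_SECONDS:
            break

        llm_response = call_llm(system_prompt, conversation_history, tools)
        conversation_history.append({"role": "agent", "content": llm_response})

        if llm_response.has_tool_calls():
            tool_result = execute_tool(llm_response.tool_call)
            conversation_history.append({"role": "tool", "content": tool_result})
        else:
            return llm_response.content

    # Limits reached without a final answer.
    return "Sorry, I couldn't complete the task within the allowed number of steps."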
Conclusion
The agentic loop is a simple but powerful pattern:
Thought: The LLM reasons about what to do next based on all available context
Act: The agent executes an action through a tool call or response
Observe: Results are collected and added to context
Repeat: The loop continues until the agent achieves the goal or meets a specific condition
This architecture enables LLMs to break down complex tasks, use external tools, and work toward goals through iterative reasoning and action. The key insight is that by giving the model access to tools and a structured approach to use them, we transform a simple text generator into an agent that can complete real-world tasks.