In this topic, we'll take a closer look at the Act phase of the agentic loop. This is the stage where the agent moves from thinking to doing. After the agent determines the best next action during the Thought phase, the Act phase is where that decision is actually carried out. This is what allows the agent to have real impact – whether that means calling a tool, sending a request, updating data, or performing another operation.
Key characteristics
First, let's look at how this appears in practice. Below is an example from a meeting-scheduling agent:
```
[Thought]: Now I know the date. Next, I need to check calendar availability.
I'll query both calendars for October 7, 2025.

[Act]: Call tool "check_calendar_availability" with parameters:
* user: "me"
* contact: "Riley Morgan"
* date: "2025-10-07"
* duration: 60 minutes
```

In this case, the Act phase is the concrete execution of the `check_calendar_availability` tool with the specified parameters. It's the operational step that turns a plan into a real-world result.
Key characteristics of the Act phase:
* Deterministic execution: Once the model decides on an action, the execution is performed according to predefined rules. The system doesn't "guess"; it follows the procedure you've designed.
* External interaction: This is where the agent reaches beyond its own internal reasoning to interact with tools, APIs, databases, services, or devices.
* State change: Actions can update the world (such as creating a meeting) or at least update what the agent knows (such as retrieving information from a calendar), as sketched below.
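To make the last two points concrete, here's a minimal sketch of two hypothetical tool implementations – one read-only, one that changes the world. The bodies return canned data for illustration; a real version would call an actual calendar service:

```python
def check_calendar_availability(user: str, contact: str, date: str, duration: int) -> dict:
    # Read-only action: only updates what the agent *knows*.
    # Canned response for illustration; a real tool would query a calendar API.
    return {"date": date, "available_slots": ["10:00", "14:00"]}


def create_meeting(attendees: list, date: str, start_time: str, duration: int, title: str) -> dict:
    # Mutating action: actually changes the world by creating an event.
    # A real implementation would persist this via a calendar API.
    return {"meeting_id": "mtg_001", "status": "created", "title": title}
```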
How does Act work under the hood?
Let's return to our simplified agent implementation to see where Act fits:
```python
def agent_loop(user_query, system_prompt, tools):
    conversation_history = [{"role": "user", "content": user_query}]

    while True:
        # THOUGHT: LLM decides what to do next
        llm_response = call_llm(system_prompt, conversation_history, tools)
        conversation_history.append({"role": "agent", "content": llm_response})

        if llm_response.has_tool_calls():
            # ⭐ ACT: Execute the tool ⭐
            tool_result = execute_tool(llm_response.tool_call)

            # OBSERVE: Add result to history
            conversation_history.append({"role": "tool", "content": tool_result})
        else:
            return llm_response.content
```

The Act phase appears as a single line:
```python
tool_result = execute_tool(llm_response.tool_call)
```

But this line hides significant complexity! Let's unpack what's really happening. Here's an important point: the LLM doesn't execute functions directly. Instead, the process works like this:
1. The LLM analyzes available tools and decides which one to use.
2. The LLM returns the function name and parameters in a structured format.
3. Our code actually executes the function.
4. We return the results back to the LLM.
This is a fundamental principle of how agents work. The LLM is the "brain" that plans, but the execution happens in our runtime environment.
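As a hedged sketch, here's what a single cycle of this handoff could leave in the conversation history, using the simplified message shapes from this topic (not any specific provider's wire format):

```python
# One plan-execute-observe round trip (illustrative message shapes only).
conversation_history = [
    {"role": "user", "content": "Schedule a meeting with Riley next Tuesday."},
    # Steps 1-2: the LLM picks a tool and returns its name plus parameters.
    {"role": "agent", "content": None,
     "tool_call": {"name": "check_calendar_availability",
                   "arguments": {"user": "me", "contact": "Riley Morgan",
                                 "date": "2025-10-07", "duration": 60}}},
    # Steps 3-4: our runtime executes the function and feeds the result back.
    {"role": "tool",
     "content": {"success": True, "data": {"available_slots": ["10:00", "14:00"]}}},
]
```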
Tool calling (function calling)
In the simplest implementation, you could describe available tools in the system prompt as plain text:
SYSTEM_PROMPT += """
You have access to the following tools:
- get_current_date(): Returns the current date
- check_calendar_availability(user, contact, date, duration): Checks calendar
- create_meeting(attendees, date, start_time, duration, title): Creates a meeting
When you need to use a tool, respond with: TOOL_CALL: function_name(param1, param2)
"""However, modern LLMs support a more robust abstraction called tool calling (also known as function calling). This is a structured way to define and call functions.
However, modern LLMs support a more robust abstraction called tool calling (also known as function calling). This is a structured way to define and call functions.

Tool definition:
Instead of describing tools in plain text, we define them as structured schemas:
```python
tools = [
    {
        "name": "check_calendar_availability",
        "description": "Checks calendar availability for two people on a specific date",
        "parameters": {
            "type": "object",
            "properties": {
                "user": {
                    "type": "string",
                    "description": "The primary user's identifier"
                },
                "contact": {
                    "type": "string",
                    "description": "The contact person's name or identifier"
                },
                "date": {
                    "type": "string",
                    "description": "Date in YYYY-MM-DD format"
                },
                "duration": {
                    "type": "integer",
                    "description": "Meeting duration in minutes"
                }
            },
            "required": ["user", "contact", "date", "duration"]
        }
    }
]
```

These tool definitions are passed to the LLM along with the conversation. The LLM can then "decide" to call a tool by returning a structured response.
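How these schemas reach the model depends on the provider. As one hedged illustration, an OpenAI-style SDK accepts them via a `tools` argument; the `"type": "function"` envelope and exact field names vary across providers and SDK versions:

```python
from openai import OpenAI

client = OpenAI()  # assumption: an OpenAI-style SDK; other providers differ

response = client.chat.completions.create(
    model="gpt-4o",  # assumption: any tool-calling-capable model
    messages=conversation_history,
    # Wrap our schemas in the provider's expected "function tool" envelope.
    tools=[{"type": "function", "function": tool} for tool in tools],
)
```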
LLM response with tool call:
When the LLM wants to use a tool, it returns a response like this:
```json
{
  "role": "assistant",
  "content": null,
  "tool_calls": [
    {
      "id": "call_abc123",
      "type": "function",
      "function": {
        "name": "check_calendar_availability",
        "arguments": {
          "user": "me",
          "contact": "Riley Morgan",
          "date": "2025-10-07",
          "duration": 60
        }
      }
    }
  ]
}
```

Note that:
* The `content` field is null because the LLM isn't providing text – it's requesting a tool call instead. This is the structure that `llm_response.has_tool_calls()` checks to decide the next step.
* The tool call has a unique ID for tracking.
* Arguments are provided as a structured object, not plain text.
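For completeness, here's a hedged sketch of the `has_tool_calls()` check used in our loop, assuming a hypothetical `LLMResponse` wrapper (not part of any real SDK):

```python
from dataclasses import dataclass, field

@dataclass
class LLMResponse:
    """Hypothetical wrapper around a raw model response."""
    content: str | None = None
    tool_calls: list = field(default_factory=list)

    def has_tool_calls(self) -> bool:
        # The model requested an action if tool_calls is non-empty;
        # in that case, content is typically null.
        return len(self.tool_calls) > 0
```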
Implementing the Act phase
Now let's look at how we actually implement tool execution:
```python
def execute_tool(tool_call):
    """
    Execute a tool call and return the result.
    This is the Act phase!
    """
    function_name = tool_call.function.name
    arguments = tool_call.function.arguments

    # Map function names to actual implementations
    available_tools = {
        "get_current_date": get_current_date,
        "check_calendar_availability": check_calendar_availability,
        "create_meeting": create_meeting
    }

    # Get the function
    if function_name not in available_tools:
        return {"error": f"Unknown tool: {function_name}"}

    function = available_tools[function_name]

    # Execute the function with provided arguments
    try:
        result = function(**arguments)
        return {"success": True, "data": result}
    except Exception as e:
        return {"success": False, "error": str(e)}
```

This code:
1. Extracts the function name and arguments from the tool call
2. Maps the function name to an actual Python function
3. Executes the function with the provided arguments
4. Handles errors gracefully
5. Returns the result in a structured format
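As a quick usage sketch, we can feed `execute_tool()` an object shaped like the structured response above (`SimpleNamespace` stands in for whatever object your LLM client returns):

```python
from types import SimpleNamespace

# Fake tool call mirroring the JSON structure shown earlier.
tool_call = SimpleNamespace(function=SimpleNamespace(
    name="check_calendar_availability",
    arguments={"user": "me", "contact": "Riley Morgan",
               "date": "2025-10-07", "duration": 60},
))

print(execute_tool(tool_call))
# e.g. {'success': True, 'data': {'date': '2025-10-07', 'available_slots': [...]}}
```

Note that some providers deliver `arguments` as a JSON-encoded string rather than a dict; in that case you'd `json.loads()` it before the `**` unpacking.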
Why good tool descriptions matter
For the LLM to call the right tool with the right parameters, you need to provide clear, detailed descriptions. This is crucial!
Bad tool description:
```json
{
  "name": "check_calendar",
  "description": "Checks calendar",
  "parameters": {
    "type": "object",
    "properties": {
      "data": {"type": "string"}
    }
  }
}
```

Problems:
* Unclear what "checks calendar" means
* Vague parameter name "data"
* No indication of what format the data should be in
* No examples
Good tool description:
```json
{
  "name": "check_calendar_availability",
  "description": "Checks mutual calendar availability between two people for a meeting. Returns available time slots that work for both parties on the specified date.",
  "parameters": {
    "type": "object",
    "properties": {
      "user": {
        "type": "string",
        "description": "The primary user's email or identifier (e.g., 'me', 'user@example.com')"
      },
      "contact": {
        "type": "string",
        "description": "The contact person's full name or email address"
      },
      "date": {
        "type": "string",
        "description": "Target date in ISO format (YYYY-MM-DD), e.g., '2025-10-07'"
      },
      "duration": {
        "type": "integer",
        "description": "Desired meeting duration in minutes. Common values: 30, 60, 90"
      }
    },
    "required": ["user", "contact", "date", "duration"]
  }
}
```

Benefits:
* Clear description of what the tool does
* Each parameter has a detailed description
* Expected formats are specified
* Examples are provided
* Required parameters are marked
The quality of your tool descriptions directly impacts how well your agent works!
Conclusion
The Act phase may seem simple at first glance, but it's actually one of the most important parts of the agentic loop. This is where the agent's plans turn into real effects. Here are the key takeaways from the topic:
* The LLM doesn't execute functions – it only returns function names and parameters. We are responsible for the actual execution.
* Tool calling (function calling) is a structured abstraction for defining and using tools, more robust than plain-text descriptions.
* Good tool descriptions are critical – they directly impact the agent's ability to choose the right tool and provide correct parameters.
* The Act phase is deterministic – once we receive the function call from the LLM, execution follows the rules defined in our code.
* Available tools must be passed to the LLM – in the simplest version through the system prompt, but more robustly through the structured tool calling API.