
Anatomy of a prompt

A single word or sentence is usually enough to get a response from an LLM. You ask a quick question and receive an immediate answer. However, if you want consistently high-quality, customized, and reliable results, a basic query won't be enough.

In this topic, we'll examine the essential elements of a well-crafted prompt—the building blocks that make you a more effective prompt engineer.

The persona

Have you ever wondered how to make LLMs choose slightly different words, structure their sentences differently, or approach a problem from a specific angle? Adding personality to your prompts helps you do exactly that. By default, LLMs are designed to be helpful, neutral, and somewhat generic.

A personality, or persona, is a character or identity you give the model to override this default neutrality. Think of the LLM as a skilled method actor: without a script or a character description, it will just read the lines flatly. But cast it in a role, with a backstory, traits, and a way of speaking, and it transforms the performance entirely.

Put more formally, the persona defines the role, expertise, or style the model should adopt—such as a "friendly chess tutor," "expert financial advisor," or "fictional character." Imagine you're developing a virtual assistant for a travel agency. You want it to not only provide accurate information but also reflect the friendly and professional tone of your brand:

You are an experienced travel advisor with a friendly, high-energy, and approachable tone. You love encouraging people to explore the world.

From the above prompt, you can see that a persona does three things:

  1. Sets the tone: it tells the model to use enthusiastic adjectives ("amazing," "breathtaking") rather than dry descriptions.

  2. Defines the expertise/qualities: as an "experienced travel advisor," the model is more likely to offer helpful tips or logistical advice.

  3. Ensures consistency: the persona prevents the model from suddenly sounding like a tax auditor when you want it to stay "fun." You can also add quirks or catchphrases the model should use.

You will typically provide the persona as part of the system message. This ensures that the model sticks to its role throughout the entire conversation. It prevents the model from "breaking character" or reverting to its default neutral style as the chat progresses.
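As a rough sketch, here is how the travel-advisor persona from above might be wired into the system message. The role-based message format mirrors common chat-completion APIs, and `build_messages` is a hypothetical helper, not a library function:

```python
# The persona text from the travel-agency example above.
PERSONA = (
    "You are an experienced travel advisor with a friendly, high-energy, "
    "and approachable tone. You love encouraging people to explore the world."
)

def build_messages(user_prompt: str) -> list[dict]:
    # Put the persona in the system message so it governs every turn,
    # not just the first reply.
    return [
        {"role": "system", "content": PERSONA},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Suggest a weekend getaway in Japan.")
```

Because the system message is resent with every request, the model keeps the same voice no matter how long the conversation runs.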

Context

While the persona defines who the model is, the context defines where the model operates and what it knows. Imagine trying to follow a movie plot starting halfway through—you would miss the setting and key facts. LLMs are stateless by design. This means they have no memory of past interactions or your current specific situation unless you explicitly give them that information.

Context provides relevant facts and history to narrow the model's vast knowledge down to your specific task:

The user's travel profile highlights an interest in martial arts and unique cultural experiences. Company policy mandates that all itineraries must include three distinct cultural experiences. Additionally, a recent pricing sheet confirms the Kamakura day trip requires a dedicated travel allowance.

Without context, models often guess, leading to generic or entirely irrelevant outputs. Think about asking the model to "fix the bug." Without knowing the programming language, the exact error message, or the code snippet, it cannot effectively help you. Providing background information bridges the gap between the model's general training data and your specific problem.

Various context sources for models: chat history, documents, and web search.

The most immediate source of context is the conversation history. Modern chat interfaces automatically send previous messages back to the model with your new prompt, creating the illusion of continuous memory. This is vital because it allows the model to correctly understand references and maintain a smooth, logical flow across multiple turns. However, this "context window" is limited. If the conversation goes on for too long, the model may "forget" the earliest messages.
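A minimal sketch of how an application might cope with a limited context window: keep the system message (the persona) and drop the oldest turns. `truncate_history` and the four-turn cutoff are illustrative assumptions, not a real API:

```python
def truncate_history(messages: list[dict], max_turns: int = 4) -> list[dict]:
    # Preserve the system message, but keep only the most recent
    # user/assistant turns, mimicking how a limited context window
    # "forgets" the earliest messages.
    system = [m for m in messages if m["role"] == "system"]
    recent = [m for m in messages if m["role"] != "system"][-max_turns:]
    return system + recent

history = [{"role": "system", "content": "You are a travel advisor."}]
for i in range(1, 7):
    history.append({"role": "user", "content": f"Question {i}"})
    history.append({"role": "assistant", "content": f"Answer {i}"})

trimmed = truncate_history(history)
```

After trimming, the model still knows who it is, but the first questions are gone, which is exactly the "forgetting" behavior described above.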

Context can also come from external data sources, known as knowledge bases. This is useful when you want the model to use your company's private, up-to-date information, not just its general training data. To achieve this, you inject specific documents or manuals directly into the prompt. This process, known as Retrieval-Augmented Generation (RAG), enriches the prompt with your data, grounding the answer in facts the model would not otherwise have.
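The RAG idea can be sketched in a few lines. Here a naive keyword match stands in for the vector-similarity search a real system would use, and the knowledge-base entries echo the travel-agency context example above:

```python
# A toy knowledge base; a production system would query a document store.
KNOWLEDGE_BASE = {
    "policy": "Company policy: all itineraries must include three "
              "distinct cultural experiences.",
    "pricing": "Pricing sheet: the Kamakura day trip requires a "
               "dedicated travel allowance.",
}

def retrieve(query: str, kb: dict) -> list[str]:
    # Naive keyword overlap standing in for real vector-similarity search.
    words = {w for w in query.lower().split() if len(w) > 3}
    return [text for text in kb.values()
            if any(w in text.lower() for w in words)]

def build_rag_prompt(query: str, kb: dict) -> str:
    # Inject only the retrieved documents, keeping the prompt focused.
    context = "\n".join(retrieve(query, kb))
    return f"Context:\n{context}\n\nQuestion: {query}"

prompt = build_rag_prompt("How is the Kamakura trip budgeted?", KNOWLEDGE_BASE)
```

Only the relevant pricing document ends up in the prompt; the unrelated policy line is left out, which keeps the context window small and on-topic.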

In advanced setups, models can access APIs, web search, or code execution environments to gather real-time data. When you ask for the current stock price, the model calls the tool, retrieves the live figure, and appends the result to the ongoing context before composing its answer, so the output reflects current data rather than stale training knowledge.
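The stock-price flow might look like the sketch below. Both `fetch_stock_price` and the fictional "ACME" ticker are made-up placeholders; the point is only that the tool's output becomes one more message in the context:

```python
def fetch_stock_price(symbol: str) -> float:
    # Stand-in for a real market-data API or web-search tool.
    quotes = {"ACME": 123.45}
    return quotes[symbol]

def append_tool_result(messages: list[dict], symbol: str) -> list[dict]:
    # The tool's output is appended to the context so the model can
    # ground its next reply in live data instead of training data.
    price = fetch_stock_price(symbol)
    messages.append({"role": "tool",
                     "content": f"{symbol} is trading at ${price:.2f}"})
    return messages

context = [{"role": "user", "content": "What is ACME trading at?"}]
context = append_tool_result(context, "ACME")
```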

Task statement

After defining who the model is (the persona) and what foundational knowledge it has (the context), you must then define the action. The task statement is a clear instruction that tells the model exactly what you want it to produce. It acts as the mission objective, turning the setup and background information into a concrete, executable goal.

Instructions can take many forms, but their purpose is always to clarify the desired final output. You can use a direct question, a specific command, or even a request to generate an image. Whatever your needs, the key requirement is absolute clarity and directness in defining the desired action. Vague instructions lead to vague, unhelpful results.

In the task statement, you must clearly define the depth, subject, and necessary boundaries of the action itself. For instance, the example below dictates what needs to be done, the scope (7 days), and the required content areas (attractions, cuisine, culture):

Provide a 7-day itinerary highlighting must-see attractions, local cuisine recommendations, and cultural experiences for a trip to Japan. 

The task statement is where the persona and context you built earlier finally come together. The instruction uses the persona's style and the context's facts to create a unique and tailored response.
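To make that concrete, the pieces built so far can be assembled into one prompt. The labeled blocks and the `assemble_prompt` helper are an illustrative convention, not a required format:

```python
PERSONA = ("You are an experienced travel advisor with a friendly, "
           "high-energy, and approachable tone.")
CONTEXT = ("The user's travel profile highlights an interest in martial "
           "arts and unique cultural experiences.")
TASK = ("Provide a 7-day itinerary highlighting must-see attractions, "
        "local cuisine recommendations, and cultural experiences for a "
        "trip to Japan.")

def assemble_prompt(persona: str, context: str, task: str) -> str:
    # Each component gets its own labeled block so the model can clearly
    # separate who it is, what it knows, and what it must do.
    return f"{persona}\n\nBackground:\n{context}\n\nTask:\n{task}"

prompt = assemble_prompt(PERSONA, CONTEXT, TASK)
```

Keeping the components as separate variables also makes it easy to swap in a different persona or fresh context without touching the task.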

Constraints

After the model knows what to do (the task statement), we must tell it how to do it. Constraints are the specific rules and boundaries you place on the model's final output. They act like a necessary safety rail, ensuring the response gets delivered in the exact form you need for your application. These limitations stop the LLM from going off-script and prevent the use of unusable formats, saving us a lot of time later on.

Constraints most often dictate the required structure of the output. If you require the data to be easily used by another computer system, you can enforce a particular output format. This means asking for structured outputs like JSON, a clear numbered list, or a Markdown table. Without this explicit rule, the model will likely write a long narrative paragraph instead of the structured data you require.

Other types of constraints manage the actual content and style of the response. This includes setting a maximum word count, perhaps "no more than 200 words," or limiting the output to three bullet points:

Use bullet points for each day, keep each bullet under 20 words, and ensure the itinerary is realistic for a one-week trip.

The task statement tells the model what to create, but constraints define the acceptable limits of a successful output. A prompt that lacks clear constraints is incomplete: the final result will be unpredictable. By adding these rules, you make the model's output reliable enough for automation or any integrated application you build.
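When a constraint demands structured output, the application can check that the model actually obeyed it. The sketch below assumes the prompt asked for a JSON array of day objects with "day" and "activities" keys; `validate_itinerary` is a hypothetical checker:

```python
import json

def validate_itinerary(raw: str, max_days: int = 7) -> list[dict]:
    # Reject any reply that breaks the format or scope constraints.
    days = json.loads(raw)  # raises ValueError if the reply is not valid JSON
    if len(days) > max_days:
        raise ValueError("itinerary exceeds the 7-day scope")
    for day in days:
        if not {"day", "activities"} <= set(day):
            raise ValueError("each entry needs 'day' and 'activities' keys")
    return days

reply = '[{"day": 1, "activities": ["Senso-ji Temple", "Ramen tasting"]}]'
itinerary = validate_itinerary(reply)
```

A check like this is the payoff of well-chosen constraints: because the required format was stated up front, a malformed reply can be detected and retried automatically instead of silently breaking the pipeline.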

Conclusion

We've broken down the structure of an effective prompt, moving far beyond a simple question. A great prompt is a set of carefully assembled components. By clearly defining the persona, you give the model its voice and expertise. The context grounds it in relevant facts, external data, or real-time details—making the answer precise and accurate.

The task statement then provides a clear command, defining the exact action and scope the model must follow. Finally, constraints act as boundaries for the output, enforcing the required structure and content style. Understanding this anatomy will help you transform your LLM interactions and achieve high-quality outputs every time.
