
Problem decomposition prompting

10-minute read

When faced with a complex problem, our instinct is often to rush straight toward a solution. However, trying to solve a multifaceted issue all at once usually leads to confusion or errors. This is true for humans, and it is especially true for Large Language Models (LLMs).

In this topic, we will explore problem decomposition—a set of prompting strategies designed to break down complicated tasks into manageable steps. While most of these techniques build on the foundation of Chain-of-Thought (CoT) prompting (asking the model to "think step-by-step"), decomposition goes further by imposing specific structures on how that thinking is organized. It turns a vague request into a rigorous process.

Step-back and Chain-of-Logic

Sometimes, a model gets too bogged down in specific details and fails to apply the correct general principles. Step-back prompting asks the model to pause and identify the high-level concepts or scientific principles relevant to the problem before attempting to solve the specific instance. By retrieving the general rule first, the model grounds its reasoning in correct theory rather than hallucinated "common sense."

The second phase, reconstruction, happens when the model returns to the specific problem. Once the abstract principle is established, the model uses it as a lens to re-evaluate the details of the user's request. It maps the universal rule onto the specific variables of the context, ensuring the final answer is a direct logical descendant of that core principle rather than a guess.

Step-back prompting workflow

This approach pairs naturally with Chain-of-Logic. While Step-back retrieves the correct theory (the major premise), Chain-of-Logic ensures the validity of the argument structure itself (e.g., "If rule A applies, and situation B matches the criteria for rule A, then conclusion C must follow"). In high-stakes domains, such as formal verification of smart contracts or legal analysis, you might combine these by asking the model to state its premises (Step-back) and then explicitly verify that the conclusion follows logically.

Consider a scenario where a physics student is confused about motion. If asked directly, a model might guess based on intuition. A Step-back approach forces it to ground the answer in Newton's laws first.

Persona: You are a senior physics professor explaining concepts to undergraduates.
Context: A student asks: "If I throw a ball strictly horizontally from a 100m cliff, and drop a rock from the same height at the exact same moment, which hits the ground first?"
Task statement: First, step back and identify the fundamental physics principle governing vertical and horizontal motion independence. Explain this principle in isolation. Then, apply this principle to answer the student's specific question.
Constraints: The "Principle" section must be distinct from the "Application" section.
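
If you orchestrate this flow in code rather than in a single prompt, the two phases can also be split into separate model calls, so the principle is locked in before the application begins. Below is a minimal sketch assuming the OpenAI Python SDK (openai>=1.0), an API key in the environment, and a placeholder model name; adapt the client and model to your provider.

# Minimal Step-back pipeline: retrieve the general principle first, then apply it.
# Assumes the OpenAI Python SDK (openai>=1.0) and an API key in the environment;
# the model name below is a placeholder, swap in whichever model you actually use.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder

def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

question = (
    "If I throw a ball strictly horizontally from a 100 m cliff, and drop a rock "
    "from the same height at the exact same moment, which hits the ground first?"
)

# Step 1 (step back): retrieve the general principle, ignoring the specific numbers.
principle = ask(
    "What fundamental physics principle governs the independence of vertical "
    "and horizontal motion? State the principle in isolation, without solving any problem."
)

# Step 2 (apply): answer the original question using only that principle.
answer = ask(
    f"Principle:\n{principle}\n\n"
    f"Using only this principle, answer the student's question:\n{question}"
)
print(answer)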

However, this technique has limitations. It is less effective for tasks that are purely creative or subjective, where there are no "universal truths" to retrieve. If applied to writing a poem or brainstorming a marketing slogan, stepping back to general principles may result in a generic or clichéd response rather than a tailored solution.

Plan-and-Solve and Skeleton-of-Thought

Once you move beyond simple principles, you encounter tasks that require a sequence of actions. Plan-and-Solve (PaS) prompting separates the reasoning process into two distinct phases: devising a plan and then executing it. A familiar example comes from AI agents: before writing a single line of code, an agent like Claude Code generates a list of "TODOs" and then executes them one at a time.

A variation of this for content generation is the Skeleton-of-Thought technique. Instead of planning actions, the model plans the structure of a response (the skeleton) before fleshing out the details. This is excellent for drafting long-form documentation or articles, ensuring the final output is coherent.
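
As a rough sketch of how Skeleton-of-Thought can be orchestrated, the snippet below requests a numbered skeleton first and then expands each point in a separate call. The ask_llm helper is a hypothetical stand-in for a real model client; it is stubbed here so the control flow runs on its own.

# Skeleton-of-Thought sketch: plan the structure first, then flesh out each point.
# ask_llm is a hypothetical stand-in for a real model call; replace the stub body
# with your provider's client to get actual generations.

def ask_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:60]}...]"  # stub, replace with a real call

topic = "A developer guide to problem decomposition prompting"

# Phase 1: ask only for the skeleton, short numbered points with no details yet.
skeleton = ask_llm(
    f"Write a numbered skeleton (3-5 points, one short phrase each) for: {topic}. "
    "Do not elaborate on any point."
)

# Phase 2: expand every skeleton point in its own call.
points = [line for line in skeleton.splitlines() if line.strip()]
sections = [
    ask_llm(f"Topic: {topic}\nSkeleton:\n{skeleton}\n\nExpand this point in one paragraph: {point}")
    for point in points
]

print("\n\n".join(sections))

Because each expansion depends only on the skeleton, the second phase can run in parallel across points, which is a large part of the technique's appeal.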

To illustrate Plan-and-Solve in a technical workflow, imagine an autonomous agent tasked with fixing a bug.

Persona: You are a senior software engineer acting as an autonomous debugging agent.
Context: A user reports that the payment processing module throws a TimeoutError when processing transactions over $10,000. You have access to the codebase and the test suite.
Task statement: Debug this issue. Do not rush to a fix. First, create a numbered plan that includes reproducing the error, analyzing the logs, proposing a fix, running the test suite, and confirming resolution. Then, execute the plan step-by-step.
Constraints: Output the "Plan" first. Then, output the "Execution" where you simulate the findings and actions for each step.

The main drawback here is rigidity. If the initial plan is flawed—for example, if the model forgets a crucial step like "back up the database"—the execution phase will faithfully follow that flawed plan to failure. It requires the model to have a strong understanding of the domain before it starts working.

Least-to-Most

While Plan-and-Solve works well for linear tasks, some problems are hierarchical. Least-to-Most (LtM) prompting involves breaking a specific problem down into a sequence of simpler sub-problems, where the answer to each one is required to solve the next.

The model solves the easiest sub-problem first and uses that answer to build the next, slightly harder step. This is widely used in educational AI tutors. To teach a student calculus, the model first verifies that the student understands the algebra prerequisites. If the model tries to solve the whole equation at once, it might hallucinate; if it solves the sub-components first, it retains accuracy.

Persona: You are a helpful math tutor for high school students.
Context: The student needs to solve a complex word problem: "A farmer has a rectangular field where the length is twice the width. If he expands the width by 20 meters and the length by 10 meters, the area increases by 3000 square meters. What are the original dimensions?"
Task statement: Solve this by breaking it down. First, define the variables for the original dimensions. Second, write the equation for the original area. Third, express the new dimensions and new area. Finally, solve the equation.
Constraints: Show your work for each sub-question clearly before moving to the next.
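
To make the dependency chain explicit, here is a small sketch that mirrors the same decomposition in Python using sympy (assumed to be installed); each step uses only the results of the steps before it.

# Least-to-Most decomposition of the farmer's field problem, one sub-problem at a time.
# Assumes the sympy library is installed; each step builds only on the previous ones.
from sympy import Eq, expand, solve, symbols

# Sub-problem 1: define a variable for the original width; the length is twice the width.
w = symbols("w", positive=True)
length = 2 * w

# Sub-problem 2: write the original area.
original_area = length * w              # 2*w**2

# Sub-problem 3: express the new dimensions and the new area.
new_area = expand((w + 20) * (length + 10))

# Sub-problem 4: the area grows by 3000 square meters, so solve for the width.
width_value = solve(Eq(new_area - original_area, 3000), w)[0]

print(f"Original width:  {width_value} m")      # 56 m
print(f"Original length: {2 * width_value} m")  # 112 m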

This approach requires the problem to be decomposable into a clear dependency chain. If the sub-problems are interdependent (circular dependencies), this linear decomposition breaks down. It also increases the token count significantly, as the context window fills up with intermediate steps.

Tree-of-Thoughts

For highly complex or creative tasks, a linear path might not be enough. Tree-of-Thoughts (ToT) prompts the model to generate multiple distinct lines of reasoning (branches), evaluate them, and then expand on the most promising one.

Tree-of-Thoughts prompting workflow

This mirrors how human experts brainstorm. In medical diagnostics, a model might generate three different "differential diagnoses" based on symptoms, evaluate the likelihood of each, and then pursue the most probable one. Here’s another example:

Persona: You are a creative writing coach and plot consultant.
Context: We are writing a mystery novel. The victim is found in a locked room with no windows. The only key is in the victim's pocket.
Task statement: Generate three distinct theories for how the murderer escaped. Evaluate the plausibility of each theory based on classic "locked room" tropes, then select the most innovative one to develop into a scene outline.
Constraints: Use a structured format: "Theory 1," "Theory 2," "Theory 3," followed by "Critique," and finally "Selected Path."
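
Behind the scenes, a Tree-of-Thoughts run is essentially a generate-evaluate-expand loop. The sketch below shows that loop with a hypothetical ask_llm helper stubbed out and a faked plausibility score so it runs on its own; in practice both the generation and the evaluation would go through your model of choice.

# Tree-of-Thoughts sketch: branch out, evaluate each branch, expand the most promising one.
# ask_llm is a hypothetical stand-in for a real model call; the plausibility score is
# faked with a random number so the control flow runs without any API.
import random

def ask_llm(prompt: str) -> str:
    return f"[model output for: {prompt[:50]}...]"  # stub, replace with a real call

def score_branch(problem: str, branch: str) -> float:
    # In practice, ask the model to rate plausibility on a 1-10 scale and parse the reply.
    _ = ask_llm(f"Problem: {problem}\nTheory: {branch}\nRate its plausibility from 1 to 10.")
    return random.uniform(1, 10)

problem = (
    "Victim found in a locked, windowless room; the only key is in the victim's pocket."
)

# Step 1: generate several distinct branches (theories).
branches = [
    ask_llm(f"{problem}\nPropose escape theory #{i + 1}, distinct from any others.")
    for i in range(3)
]

# Step 2: evaluate every branch.
scores = [score_branch(problem, branch) for branch in branches]

# Step 3: expand only the highest-scoring branch.
best = branches[scores.index(max(scores))]
print(ask_llm(f"Develop this theory into a scene outline:\n{best}"))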

The trade-off for this depth is cost and speed. Tree-of-Thoughts is computationally expensive because the model has to generate and evaluate multiple branches. It is often overkill for simple queries where a single direct answer would suffice.

Program of Thoughts

For mathematical, data-heavy, or algorithmic tasks, LLMs can struggle with precise mental arithmetic or bulk file operations. The Program of Thoughts (PoT) technique instructs the model to act as a programmer. Instead of calculating the answer in natural language, the model generates executable code (usually Python) to solve the problem.

You'll see this in tools like Junie from JetBrains. Imagine a scenario where the agent is tasked with performing complex data manipulations, such as decoding multiple Base64-encoded files or normalizing date formats across a massive dataset. Rather than simulating the transformation in prose, the model generates a Python script and executes it.

Persona: You are a data scientist specializing in e-commerce analytics.
Context: You are provided with a CSV file named sales_data.csv containing 100,000 rows of transaction data with columns: date, category, amount, and customer_id.
Task statement: We need to find the top 3 best-selling categories by total revenue for the Q4 holiday season (October to December). Write a Python script using the pandas library to load the data, filter for Q4, group by category, sum the revenue, and print the top 3 results.
Constraints: Do not simulate the data processing in text. Output only the Python code block that would calculate this if run.
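
For reference, a script satisfying that prompt might look like the sketch below. The file and column names come from the prompt itself; treating the amount column as revenue is an assumption of this example.

# Sketch of the pandas script the model might produce for the prompt above.
# Assumes sales_data.csv exists with the columns date, category, amount, customer_id,
# and that "amount" holds the transaction revenue.
import pandas as pd

df = pd.read_csv("sales_data.csv", parse_dates=["date"])

# Filter for the Q4 holiday season (October through December).
q4 = df[df["date"].dt.month.isin([10, 11, 12])]

# Group by category, sum the revenue, and keep the top 3.
top3 = q4.groupby("category")["amount"].sum().nlargest(3)

print(top3)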

The limitation here is largely environmental. This approach relies entirely on the availability of an external code execution environment. If the environment lacks the necessary libraries or if the model generates code with syntax errors, the process fails.

Conclusion

Problem decomposition transforms an LLM from a simple text generator into a capable problem solver. In practice, you will rarely use just one of these in isolation. You might use Step-back to understand the requirements, Plan-and-Solve to outline the architecture, and Program of Thoughts to execute the heavy data migration. By mixing and matching these strategies, you can handle complex workflows with a high degree of reliability.
