
Problem decomposition prompting (LtM, PaS, PoTh)


Have you ever faced a complicated problem that seemed too confusing to tackle? Such problems can be difficult and overwhelming to solve all at once, and attempting to do so can even lead to wrong answers. A valuable method known as problem decomposition can help in this situation.

Problem decomposition prompting

Problem decomposition is the process of breaking a large, complex task down into smaller, more manageable sub-tasks. Here is a real-life example: if you want to build a house, you must consider many factors, such as the location, the number of floors, the number of rooms, the total area, and so on. Decomposing the problem helps you gain an understanding of the steps required to construct the house.

In LLMs, problem decomposition prompting is important when the input prompt is complex, making it hard for the model to interpret the problem and generate the correct output. The following sections discuss three methods of problem decomposition prompting, all of which are extensions of Chain-of-Thought prompting.

Least-to-Most prompting (LtM)

Least-to-Most prompting is a technique in which you gradually increase the complexity of your questions as you continue prompting. To understand this process, consider building a weather forecasting application. Building such an application involves many aspects that have to be considered. To solve the problem, divide the task into least complex, moderately complex, and most complex parts, and then define sub-tasks for each:

Least complex task: Retrieve the current weather data for a specific location using an API.

Sub-task 1: Research available weather APIs.
Sub-task 2: Write a function to send a request to the API.
Sub-task 3: Parse the returned JSON data to extract relevant weather information.

Question: Could you retrieve the current weather data for a specific location using an API? Let's solve the above question by answering the following questions: 1. Could you list some available weather APIs? 2. Could you write a function to send a request to the API? 3. Could you parse the returned JSON data to extract relevant weather information?
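A response to the second and third questions might look roughly like the sketch below. It assumes OpenWeatherMap's current-weather endpoint as one possible choice of API; the URL, parameters, and response fields would change with a different provider, and the API key is a placeholder.

```python
import requests

def get_current_weather(city: str, api_key: str) -> dict:
    """Fetch current weather for a city and return the fields we care about.

    The endpoint and response fields below follow OpenWeatherMap's
    current-weather API; adjust them for the provider you pick.
    """
    url = "https://api.openweathermap.org/data/2.5/weather"
    response = requests.get(url, params={"q": city, "appid": api_key, "units": "metric"})
    response.raise_for_status()          # fail loudly on HTTP errors
    data = response.json()               # parse the returned JSON

    # Extract only the relevant weather information (sub-task 3)
    return {
        "city": data["name"],
        "temperature_c": data["main"]["temp"],
        "humidity": data["main"]["humidity"],
        "description": data["weather"][0]["description"],
    }
```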

Moderately complex task: Implement caching to store weather data and reduce API calls.

Sub-task 1: Design a caching mechanism to store the latest weather data.
Sub-task 2: Implement a time-based invalidation strategy for the cache.
Sub-task 3: Update the weather retrieval function to check the cache before making an API call.

Question: How can caching be implemented to store weather data and reduce API calls? Let's solve the above question by answering the following questions: 1. How can I design a caching mechanism to store the latest weather data? 2. How can I implement a time-based invalidation strategy for the cache? 3. How can I update the weather retrieval function to check the cache before making an API call?
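One plausible answer to these questions is a simple time-based cache wrapped around the `get_current_weather` function from the earlier sketch; the 10-minute lifetime below is an arbitrary assumption.

```python
import time

_cache: dict[str, tuple[float, dict]] = {}   # city -> (timestamp, weather data)
CACHE_TTL_SECONDS = 600                      # assumed 10-minute cache lifetime

def get_weather_cached(city: str, api_key: str) -> dict:
    """Return cached weather data if it is still fresh, otherwise call the API."""
    entry = _cache.get(city)
    if entry is not None:
        fetched_at, data = entry
        if time.time() - fetched_at < CACHE_TTL_SECONDS:
            return data                      # cache hit: no API call needed

    # Cache miss or stale entry: fetch fresh data and store it with a timestamp
    data = get_current_weather(city, api_key)
    _cache[city] = (time.time(), data)
    return data
```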

Most complex task: Create a user interface that allows users to get weather forecasts for the week ahead and visualize data trends.

Sub-task 1: Design a UI layout for displaying the weather forecast.
Sub-task 2: Implement UI components for daily weather summaries.
Sub-task 3: Develop interactive charts for visualizing temperature trends over time.

Question: How can a user interface be created to enable users to access weather forecasts for the week ahead and visualize data trends? Let's solve the above question by answering the following questions: 1. How can I design a UI layout for displaying the weather forecast? 2. How can I implement UI components for daily weather summaries? 3. How can I develop interactive charts for visualizing temperature trends over time?
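For the chart sub-task in particular, the model might propose something like the following matplotlib sketch; the forecast values here are placeholders standing in for data returned by a forecast API.

```python
import matplotlib.pyplot as plt

# Placeholder forecast data; a real app would take these from the forecast API.
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
highs_c = [21, 23, 22, 19, 18, 20, 24]
lows_c = [12, 14, 13, 11, 10, 12, 15]

# Plot daily highs and lows to visualize the temperature trend over the week
plt.plot(days, highs_c, marker="o", label="High (°C)")
plt.plot(days, lows_c, marker="o", label="Low (°C)")
plt.title("Temperature trend for the week ahead")
plt.ylabel("Temperature (°C)")
plt.legend()
plt.show()
```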

By focusing on simpler components first, the model can build a deeper understanding before moving to more complex aspects, and solving easier subproblems correctly increases the likelihood of correctly solving the overall problem. However, this approach requires the user to explicitly provide a clear breakdown of the task from simpler to more complex components.

Plan-And-Solve prompting (PaS)

In this method, you first plan by breaking the complex problem down into sub-tasks, and then prompt the model with questions that execute the plan. Planning the decomposition up front gives you clarity on each step in depth, which helps the model generate accurate results. This approach is important because there is a high chance that LLMs will misinterpret complex problems.

Imagine you're tasked with creating a shopping cart for an E-commerce platform. You may make the following plans to complete the task:

1. Identify the requirements for the shopping cart, such as adding items, removing items, updating quantities, and calculating totals.
2. Design the data structures required to store cart items and their quantities.
3. Establish the user workflows for interacting with the shopping cart, such as adding items to the cart and proceeding to checkout.

After making the plan, you can prompt the LLM to generate the answer according to the questions used to execute the plan.

Question: How to develop a shopping cart for E-commerce? Here is the plan to solve the above problem: 1. Identify the requirements for the shopping cart, such as adding items, removing items, updating quantities, and calculating totals. 2. Design the data structures required to store cart items and their quantities. 3. Establish the user workflows for interacting with the shopping cart, such as adding items to the cart and proceeding to checkout. Let's execute the plan by answering the following questions: 1. How can I implement the data model for the shopping cart, including classes or structures for items and the cart itself? 2. Can you create functions/methods for adding, removing, and updating items in the cart? 3. What is the logic to calculate the cart's total price, including discounts and taxes? 4. How can I develop the front-end components that allow users to interact with their shopping cart and integrate them with the back-end logic?
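As an illustration of what executing questions 1–3 of this plan could produce, here is a minimal sketch of a cart data model with add, remove, update, and total operations; the 10% tax rate and the fractional discount are assumptions made for the example.

```python
from dataclasses import dataclass, field

@dataclass
class CartItem:
    name: str
    unit_price: float
    quantity: int = 1

@dataclass
class ShoppingCart:
    items: dict[str, CartItem] = field(default_factory=dict)

    def add_item(self, item: CartItem) -> None:
        """Add an item, or increase its quantity if it is already in the cart."""
        if item.name in self.items:
            self.items[item.name].quantity += item.quantity
        else:
            self.items[item.name] = item

    def remove_item(self, name: str) -> None:
        self.items.pop(name, None)

    def update_quantity(self, name: str, quantity: int) -> None:
        """Set a new quantity; a non-positive quantity removes the item."""
        if quantity <= 0:
            self.remove_item(name)
        else:
            self.items[name].quantity = quantity

    def total(self, discount: float = 0.0, tax_rate: float = 0.10) -> float:
        """Cart total after an optional discount fraction and an assumed 10% tax."""
        subtotal = sum(i.unit_price * i.quantity for i in self.items.values())
        return round(subtotal * (1 - discount) * (1 + tax_rate), 2)
```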

While this method is useful for generating correct output, the accuracy of the output depends on the initial planning stage. If the problem is not decomposed properly, it can lead to incorrect results. It is also worth noting that decomposing the problem may not always be easy.

Program-of-Thought prompting (PoTh)

Program-of-Thoughts is a prompting technique that builds on Chain-of-Thought (CoT) prompting but with a key difference: instead of having the language model perform both reasoning and computation internally, PoT prompts the model to generate natural-language reasoning alongside executable code, which is then run by an external interpreter to perform the computations.

The model begins by decomposing the problem into smaller subproblems and uses natural language to reason about the steps needed to solve each one. For computations or operations that are better handled algorithmically, the model generates corresponding code (e.g., Python code) designed to perform specific calculations or data manipulations required at that step. The generated code is then executed by an external interpreter, and the results of the code execution are used in subsequent reasoning steps. This iterative process of reasoning, code generation, code execution, and result integration continues until the final solution is reached.
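For example, asked how much $1,000 grows at 5% annual interest after 3 years (an assumed toy problem), a PoT-style response would pair a short natural-language explanation with a small program like the one below, and the interpreter's printed output, rather than the model's own arithmetic, feeds the next reasoning step.

```python
# Generated by the model; executed by an external Python interpreter.
principal = 1000        # initial deposit in dollars
rate = 0.05             # 5% annual interest
years = 3

# Compound the balance once per year instead of reasoning about it in text.
balance = principal * (1 + rate) ** years
print(round(balance, 2))   # the printed value is fed back into the model's reasoning
```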

PoT relies on the availability of an interpreter to execute the generated code, which requires a sandboxed environment (so that malicious code is not run by accident), and its results depend on the quality of the LLM's code generation. Overall, though, it tends to increase the accuracy of the outputs.

Conclusion

LLMs can be guided through problem-solving activities in an organized manner by using these prompting approaches:

  • In the Least-to-Most prompting technique, you provide the prompts from the least complex to the most complex ones.

  • Plan-and-Solve prompting divides the problem-solving process into two distinct phases: planning (the model generates an outline or plan detailing the steps required to solve the problem), and solving (the model follows the outlined plan to work through the problem step by step, arriving at the final solution).

  • Program-of-Thoughts (PoT) is a prompting technique where language models generate both reasoning text and executable code, with the actual computation handled by a program interpreter rather than the language model itself.
