
In this topic, you will learn what an optimization problem is and how you can solve it. This skill should prove very useful because optimization problems are common in everyday life and many areas of business. We will begin by looking at real-world optimization problems, then become familiar with optimization vocabulary, before working our way through a mathematical example.

Real-world optimization problems

You have almost certainly encountered optimization problems before. When you're planning your budget for the following month or choosing between products that you want to buy, you are trying to pick the best solution from a large set of possibilities.

As an example, let's say you want to buy a smartphone. There are many available on the market, but you need yours to take amazing pictures and have the longest possible battery life. You also have a limited amount of money to spend. So, you are aiming to find the right balance between features and cost. This is an optimization problem!

People and businesses also try to solve optimization problems on a much larger scale. Most frequently, you will encounter them in fields such as:

  • Machine learning

  • Stock market and finance

  • Engineering

  • Physics

  • Operations research

  • Linear programming

Optimization vocabulary

The best way to start learning the vocabulary associated with optimization problems is via a more detailed real-world example.

Imagine that a soccer coach is planning a practice session for their team. The main priority is to maximize the distance the players can run, so this will become the objective function (more on that later). The coach can make the team spend time running, practicing defending their goal, or practicing attacking their opponent's goal. The amount of time spent on each task is a variable. It is also important to note that there is a limited amount of time until the next game. In addition, if the coach completely sacrifices defending the goal, the team might well be able to run for longer without getting tired. However, their ability to protect their goal is unlikely to be satisfactory. These are constraints.

To summarize, the variables influence the objective function, and the constraints place limits on the domain of the variables.

Now, let's define what an objective function and constraints are outside of this particular example.

To solve any optimization problem, we need a function that will describe the objective that we want to achieve — this is the objective function (also sometimes called a loss function). To solve the problem, we want to minimize (or maximize) this function.

The tricky part is usually choosing a good objective function that contains all the relevant variables. For example, in machine learning, people quite often use the mean squared error function \text{MSE} = \frac{1}{N}\sum_{i=1}^{N} (y_i - Y_i)^2, which measures the mean squared difference between predictions and observations.
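As a quick illustration, here is how that formula might look in code. This is a minimal sketch assuming NumPy is available; the sample values are invented purely for the example:

```python
import numpy as np

# Invented observations (y) and model predictions (Y) for illustration
y = np.array([3.0, -0.5, 2.0, 7.0])
Y = np.array([2.5, 0.0, 2.0, 8.0])

# MSE = (1/N) * sum_i (y_i - Y_i)^2
mse = np.mean((y - Y) ** 2)
print(mse)  # 0.375
```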

Constraints are the conditions that limit our choice of solutions to a problem. Usually, they are represented by a multivariable function or even a collection of functions. Sometimes you will find problems without any constraints, and these are described as unconstrained.

So, is there a general way to write an optimization problem?

The objective function can be linear or nonlinear, the variables can be discrete or continuous, and the constraints can be equalities or inequalities. Even so, we can still write down a general form for an optimization problem.

The general form of an optimization problem looks like this:

Minimize (or maximize) the objective function f(x, y, ...) subject to the equality constraints

h_i(x, y, ...) = 0, \quad i = 1, 2, ..., p

and the inequality constraints

g_j(x, y, ...) \leq 0, \quad j = 1, 2, ..., m

If m and p are both zero, then there are no constraints, and the problem is unconstrained.

The variables are x and y, and sometimes the objective function can even be applied to vectors. To minimize or maximize a differentiable objective function, you need to compute its first derivative and set it to zero. Then solve the equation (or system of equations!) that you obtain, and you will get the optimal values for the variables.
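To make the general form concrete, here is a small numerical sketch using SciPy's minimize routine. The objective x^2 + y^2 and the single equality constraint x + y - 1 = 0 are invented purely to illustrate the structure (so p = 1 and m = 0 here); they are not part of this topic's example:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative objective function: f(x, y) = x^2 + y^2
def f(v):
    x, y = v
    return x**2 + y**2

# One equality constraint: h(x, y) = x + y - 1 = 0
constraints = [{"type": "eq", "fun": lambda v: v[0] + v[1] - 1.0}]

# With constraints present, SciPy falls back to a constraint-capable
# method (SLSQP) automatically
result = minimize(f, x0=np.array([0.0, 0.0]), constraints=constraints)
print(result.x)  # approximately [0.5, 0.5]
```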

A mathematical example

Below is a simple example of an optimization problem.

Find two positive numbers whose product is 25 and whose sum is a minimum.

Here we have an optimization problem with an equality constraint. So, let's formulate it more mathematically.

We are given the following constraint:

xy = p

In our case, p = 25.

We also have the objective function that we want to minimize:

s = x + y

To do so, we need to express it in terms of a single variable (x or y) and find the extremum of the function.

We now rewrite the constraint (since we were told that x and y are positive, we don't need to worry about either being 0):

y = \frac{25}{x}

We can get the objective function in terms of xx:

s = x + \frac{25}{x} = x + 25x^{-1}

Now, to find the minimum (or extremum in general), we need to find the first derivative and set it equal to 0:

s' = 1 - 25x^{-2} = 0

The equation can be rewritten this way:

\frac{25}{x^2} = 1

From there, it follows that the following is true:

x = \pm\sqrt{25} = \pm 5

We need x to be positive, so x = 5, and from the constraint, y = \frac{25}{x} = 5.

This means the minimum sum that will give us the desired result is s = 10. (You can confirm that this critical point really is a minimum: the second derivative, s'' = 50x^{-3}, is positive for positive x.)
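If you'd like to double-check this result symbolically, here is a minimal sketch that mirrors the steps above. It assumes SymPy is available; any computer algebra system would do:

```python
import sympy as sp

# We were told x is positive, so restrict the symbol accordingly
x = sp.symbols("x", positive=True)

# Objective in terms of a single variable: s = x + 25/x
s = x + 25 / x

# First derivative test: solve s'(x) = 0
critical = sp.solve(sp.diff(s, x), x)
print(critical)  # [5]

# The second derivative, 50/x^3, is positive at x = 5, so it is a minimum
print(sp.diff(s, x, 2).subs(x, critical[0]))  # 2/5
print(s.subs(x, critical[0]))  # 10
```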

Conclusion

In this topic, we have learnt that:

  • Optimization problems are common in everyday life and a wide variety of fields, such as programming, finance and engineering.

  • The standard form of an optimization problem is to minimize or maximize the objective function, subject to some constraining conditions.

  • This can be achieved by using the first derivative test.
