The multi-armed bandit problem is a classic RL problem that models the exploration-exploitation trade-off in sequential decision-making scenarios. It involves an agent trying to maximize cumulative rewards by selecting from a set of actions (arms) with unknown reward distributions, balancing exploiting the currently known best action and exploring to find potentially better actions.
In this topic, we will look at the multi-armed bandit problem and consider the main approaches.
The problem setup
The multi-armed bandit problem is a sequential decision-making problem that models a scenario where an agent (or algorithm) must choose among a set of actions (or arms) over a sequence of time steps. Each action or arm has an associated reward distribution, which is initially unknown to the agent. The goal of the agent is to maximize the cumulative reward obtained over the time horizon by adaptively choosing the arms based on the rewards observed so far.
Formally, let's denote the set of arms as $\mathcal{A} = \{1, 2, \ldots, K\}$. For each arm $a \in \mathcal{A}$, there is an associated reward distribution with an unknown expected value or mean reward denoted by $\mu_a$. At each time step $t$, the agent selects an arm $a_t \in \mathcal{A}$ and receives a reward $r_t$ drawn from the reward distribution of the chosen arm.
The agent's objective is to maximize the expected cumulative reward over a time horizon of $T$ steps, which can be expressed as:

$$\mathbb{E}\left[\sum_{t=1}^{T} r_t\right],$$
where the expectation is taken over the reward distributions and the agent's policy for selecting arms.
A key challenge in the multi-armed bandit problem is the exploration-exploitation trade-off: the agent must balance exploiting the currently known best arm (based on observed rewards) and exploring other arms to gather more information about their reward distributions, as one of the unexplored arms may have a higher expected reward.
The performance of a multi-armed bandit algorithm is often measured by its cumulative regret, which is the expected difference between the rewards obtained by an optimal policy (that always selects the best arm) and the rewards obtained by the algorithm. The goal is to minimize the cumulative regret over the time horizon:

$$R(T) = \sum_{t=1}^{T} \left(\mu^* - \mu_{a_t}\right),$$

where $\mu^* = \max_{a \in \mathcal{A}} \mu_a$ is the expected reward of the optimal arm (the arm with the highest mean reward), and $\mu_{a_t}$ is the expected reward of the arm chosen by the algorithm at time step $t$.
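To make the setup concrete, here is a minimal Python sketch of a Bernoulli bandit environment together with the cumulative regret of a uniformly random policy; the arm means, the horizon, and the random baseline are illustrative assumptions rather than part of the formal definition.

```python
import numpy as np

class BernoulliBandit:
    """A K-armed bandit: pulling arm a pays 1 with (hidden) probability means[a], else 0."""

    def __init__(self, means, seed=0):
        self.means = np.asarray(means)            # true mean rewards, unknown to the agent
        self.rng = np.random.default_rng(seed)

    def pull(self, arm):
        # Draw a reward from the Bernoulli distribution of the chosen arm.
        return float(self.rng.random() < self.means[arm])


# Cumulative regret of a policy that picks arms uniformly at random (a weak baseline).
bandit = BernoulliBandit(means=[0.2, 0.5, 0.7])   # illustrative arm means
T = 1000
best_mean = bandit.means.max()
rng = np.random.default_rng(1)
regret = 0.0
for t in range(T):
    arm = rng.integers(len(bandit.means))         # uniformly random arm
    bandit.pull(arm)                              # reward is observed but unused here
    regret += best_mean - bandit.means[arm]       # expected regret of this choice
print(f"Cumulative regret after {T} steps: {regret:.1f}")
```

A reasonable bandit algorithm should accumulate regret much more slowly than this uniform baseline, ideally sublinearly in $T$.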
Various algorithms have been developed to address the multi-armed bandit problem, such as Epsilon-Greedy, Upper Confidence Bound (UCB), and Thompson Sampling, each with its own approach to balancing exploration and exploitation, which we will look at in the upcoming sections.
Epsilon-greedy
The Epsilon-Greedy algorithm is one of the simplest and most widely used algorithms for the multi-armed bandit problem. It follows a straightforward strategy to balance exploration and exploitation: with a small probability $\varepsilon$ (epsilon), the algorithm explores by selecting a random arm, and with probability $1 - \varepsilon$, it exploits by selecting the arm with the highest estimated expected reward based on the observed rewards so far.
Let $Q_t(a)$ represent the estimated expected reward for arm $a$ based on the observed rewards up to the current time step $t$. The Epsilon-Greedy algorithm can be defined as follows:
Initialize $Q_1(a) = 0$ for all arms $a \in \mathcal{A}$;
For each time step $t = 1, 2, \ldots, T$:
a) With probability $\varepsilon$, select a random arm $a_t$ from $\mathcal{A}$ (exploration),
b) With probability $1 - \varepsilon$, select the arm $a_t = \arg\max_{a} Q_t(a)$ (exploitation);
Observe the reward $r_t$ for the selected arm $a_t$;
Update the estimated expected reward $Q_{t+1}(a_t)$ based on the observed reward $r_t$.
The estimated expected reward $Q_t(a)$ is typically computed as the average of the rewards observed for arm $a$ up to the current time step. If $N_t(a)$ is the number of times arm $a$ has been selected, and the observed rewards for arm $a$ are $r_{a,1}, r_{a,2}, \ldots, r_{a,N_t(a)}$, then:

$$Q_t(a) = \frac{1}{N_t(a)} \sum_{i=1}^{N_t(a)} r_{a,i}.$$
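As a rough illustration, here is a minimal Python sketch of Epsilon-Greedy that maintains the sample-average estimate incrementally; the fixed $\varepsilon = 0.1$, the horizon, and the reuse of the hypothetical BernoulliBandit environment from the problem setup are all illustrative assumptions.

```python
import numpy as np

def epsilon_greedy(bandit, n_arms, T=1000, epsilon=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros(n_arms)   # estimated expected reward per arm
    N = np.zeros(n_arms)   # number of times each arm has been pulled
    for t in range(T):
        if rng.random() < epsilon:
            arm = int(rng.integers(n_arms))   # explore: uniformly random arm
        else:
            arm = int(np.argmax(Q))           # exploit: best estimate so far
        reward = bandit.pull(arm)
        N[arm] += 1
        Q[arm] += (reward - Q[arm]) / N[arm]  # incremental form of the sample average
    return Q, N

# Usage (assuming the BernoulliBandit sketch from the problem setup):
Q, N = epsilon_greedy(BernoulliBandit([0.2, 0.5, 0.7]), n_arms=3)
print(Q, N)
```

The incremental update $Q_{t+1}(a) = Q_t(a) + \frac{1}{N_t(a)}\big(r_t - Q_t(a)\big)$ is algebraically identical to recomputing the average, but avoids storing every observed reward.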
The key parameter in the Epsilon-Greedy algorithm is $\varepsilon$, which controls the exploration-exploitation trade-off. A higher value of $\varepsilon$ promotes more exploration, as the algorithm will select random arms more frequently. Conversely, a lower value of $\varepsilon$ promotes more exploitation, as the algorithm will tend to select the arm with the highest estimated expected reward more often.
A common practice is to decay the exploration rate $\varepsilon$ over time, starting with a higher value to promote initial exploration and gradually decreasing it to favor exploitation as more information is gathered about the reward distributions of the arms.
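One simple, purely illustrative decay schedule is an exponential decay toward a small floor; the constants below are assumptions, and schedules such as $\varepsilon_t = 1/t$ are equally common.

```python
def decayed_epsilon(t, eps_start=1.0, eps_min=0.01, decay=0.995):
    """Exponentially decay epsilon from eps_start toward a floor of eps_min as t grows."""
    return max(eps_min, eps_start * decay ** t)
```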
While the Epsilon-Greedy algorithm is simple and easy to implement, its performance can be sensitive to the choice of the exploration rate $\varepsilon$. Setting $\varepsilon$ too low can lead to insufficient exploration and potentially converging to a suboptimal arm, while setting $\varepsilon$ too high can result in excessive exploration and slow convergence to the optimal arm.
Upper Confidence Bound (UCB)
The Upper Confidence Bound (UCB) algorithm is a more sophisticated approach to balancing exploration and exploitation in the multi-armed bandit problem. Instead of using a fixed exploration rate like in Epsilon-Greedy, UCB constructs an upper confidence bound on the expected reward of each arm and selects the arm with the highest upper bound at each time step.
The key idea behind UCB is to adjust the exploration-exploitation trade-off based on the number of times each arm has been played so far. Arms that have been played fewer times will have a higher degree of uncertainty in their estimated rewards, and UCB will favor exploring those arms more by assigning them a higher upper confidence bound.
For each arm $a$, let $X_a$ be a random variable representing the reward distribution of arm $a$, with an unknown expected value $\mu_a$. At time step $t$, the UCB algorithm selects the arm $a_t$ that maximizes the following upper confidence bound:

$$a_t = \arg\max_{a} \left[ Q_t(a) + c \sqrt{\frac{\ln t}{N_t(a)}} \right],$$
where:
$Q_t(a)$ is the estimated expected reward for arm $a$, typically computed as the average of observed rewards for that arm.
$N_t(a)$ is the number of times arm $a$ has been played up to time step $t$.
$c$ is an exploration constant that determines the degree of exploration.
$\sqrt{\frac{\ln t}{N_t(a)}}$ is the exploration term, which decreases as arm $a$ is played more times.
The first term, $Q_t(a)$, represents the exploitation component, favoring arms with higher estimated rewards. The second term, $c \sqrt{\frac{\ln t}{N_t(a)}}$, is the exploration component, which is higher for arms that have been played fewer times (smaller $N_t(a)$).
The exploration constant $c$ controls the degree of exploration: a larger value of $c$ promotes more exploration, while a smaller value favors exploitation more. A common choice is $c = \sqrt{2}$, which provides theoretical guarantees on the regret bound for UCB.
After selecting arm $a_t$, the UCB algorithm observes the reward $r_t$ and updates the estimated expected reward $Q_{t+1}(a_t)$ based on the new observation.
The UCB algorithm strikes a balance between exploration and exploitation by adaptively adjusting the exploration-exploitation trade-off based on the number of times each arm has been played. It has been shown to have strong theoretical guarantees on the cumulative regret and is widely used in practice for multi-armed bandit problems.
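Below is a minimal Python sketch of UCB with $c = \sqrt{2}$, again run against the hypothetical BernoulliBandit environment from the problem setup; pulling every arm once before applying the formula is a common way to avoid division by zero in the exploration term, and the horizon is an illustrative choice.

```python
import numpy as np

def ucb(bandit, n_arms, T=1000, c=np.sqrt(2)):
    Q = np.zeros(n_arms)   # estimated expected reward per arm
    N = np.zeros(n_arms)   # number of times each arm has been pulled

    def update(arm, reward):
        N[arm] += 1
        Q[arm] += (reward - Q[arm]) / N[arm]          # incremental sample average

    # Pull each arm once so that every N[a] > 0 before using the UCB formula.
    for arm in range(n_arms):
        update(arm, bandit.pull(arm))

    for t in range(n_arms + 1, T + 1):
        ucb_values = Q + c * np.sqrt(np.log(t) / N)   # exploitation + exploration terms
        arm = int(np.argmax(ucb_values))
        update(arm, bandit.pull(arm))
    return Q, N

# Usage (assuming the BernoulliBandit sketch from the problem setup):
Q, N = ucb(BernoulliBandit([0.2, 0.5, 0.7]), n_arms=3)
print(Q, N)
```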
Thompson sampling
Unlike Epsilon-Greedy and UCB, which rely on explicit heuristics to balance exploration and exploitation, Thompson Sampling treats the problem in a probabilistic (Bayesian) framework and balances exploration and exploitation naturally through its sampling process.
In Thompson Sampling, for each arm $a$, a prior distribution is maintained over the possible values of the expected reward $\mu_a$. This prior distribution represents the initial belief about the reward distribution of the arm before any observations are made.
At each time step $t$, Thompson Sampling proceeds as follows:
For each arm $a$, sample a reward value $\theta_a$ from the posterior distribution of $\mu_a$, which is the updated distribution after incorporating the observed rewards for that arm so far.
Select the arm $a_t = \arg\max_{a} \theta_a$ with the highest sampled reward value.
Observe the actual reward $r_t$ for the selected arm $a_t$.
Update the posterior distribution of $\mu_{a_t}$ based on the new observed reward $r_t$.
The key idea behind Thompson Sampling is that arms with higher uncertainty in their posterior distributions (due to fewer observations) are more likely to occasionally sample higher reward values, leading to their selection and further exploration. At the same time, arms with higher expected rewards based on past observations are more likely to sample higher reward values more consistently, leading to their exploitation.
A common choice for the prior and posterior distributions in Thompson Sampling is the beta distribution, which is a conjugate prior for the Bernoulli distribution (commonly used to model binary rewards). If the rewards for arm $a$ are binary, and the prior is a $\mathrm{Beta}(\alpha, \beta)$ distribution, then after observing $n_a$ rewards with $s_a$ successes (rewards of 1), the posterior distribution of $\mu_a$ is:

$$\mu_a \sim \mathrm{Beta}(\alpha + s_a,\; \beta + n_a - s_a).$$
The sampled reward value $\theta_a$ is then drawn from this updated posterior distribution.
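For binary rewards with a $\mathrm{Beta}(1, 1)$ (uniform) prior, the whole algorithm reduces to the short Python sketch below; the prior, the horizon, and the reuse of the hypothetical BernoulliBandit environment from the problem setup are illustrative assumptions.

```python
import numpy as np

def thompson_sampling(bandit, n_arms, T=1000, seed=0):
    rng = np.random.default_rng(seed)
    alpha = np.ones(n_arms)   # Beta posterior parameters; Beta(1, 1) is the uniform prior
    beta = np.ones(n_arms)
    for t in range(T):
        theta = rng.beta(alpha, beta)     # one sample per arm from its current posterior
        arm = int(np.argmax(theta))       # play the arm with the highest sample
        reward = bandit.pull(arm)         # assumed to be binary (0 or 1)
        alpha[arm] += reward              # count of successes
        beta[arm] += 1 - reward           # count of failures
    return alpha, beta

# Usage (assuming the BernoulliBandit sketch from the problem setup):
alpha, beta = thompson_sampling(BernoulliBandit([0.2, 0.5, 0.7]), n_arms=3)
print(alpha / (alpha + beta))             # posterior mean estimate of each arm's reward
```

As more rewards are observed, each arm's posterior concentrates around its true mean, so suboptimal arms are sampled less and less often.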
Thompson Sampling has been shown to perform well in practice and has strong theoretical guarantees on the cumulative regret. It naturally balances exploration and exploitation without the need for tuning parameters like the exploration rate $\varepsilon$ in Epsilon-Greedy or the exploration constant $c$ in UCB.
Conclusion
You are now familiar with the basic setting of the multi-armed bandit problem and the theoretical foundations of the Epsilon-Greedy, Upper Confidence Bound (UCB), and Thompson Sampling algorithms.