So far, you have been introduced to caching, the terminology around it, and why it matters. Managing a cache is a critical task, and you must be careful about how you implement it for efficient memory management.
You need to implement caching that optimizes data access time, for example by retaining frequently used data in the cache to reduce memory access latency. To do this well, let's uncover some fundamentals about cache replacement policies.
Cache replacement policy
So, what is a cache replacement policy?
Cache replacement policies, frequently called cache replacement algorithms, are optimizing instructions or algorithms that a computer program can use to manage cached information. Simply put, a replacement policy defines which existing items in the cache should be replaced by new ones.
There are many types of cache replacement policies. This section describes the random replacement policy, a queue-based policy, a recency-based policy, and a frequency-based policy.
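Before looking at the individual policies, it helps to see where a replacement policy plugs into a cache. Below is a minimal Python sketch, assuming a simple fixed-capacity key-value cache; the class and method names (`Cache`, `choose_victim`, and so on) are illustrative, not from any standard library. The only part that differs between the policies that follow is which key `choose_victim` returns.

```python
class Cache:
    """A fixed-capacity key-value cache. The eviction decision is left
    abstract; each replacement policy answers it differently."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # key -> value

    def choose_victim(self):
        """Return the key to evict when the cache is full."""
        raise NotImplementedError

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            del self.store[self.choose_victim()]  # make room first
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)
```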
Random replacement policy
Suppose the requests arrive as shown in the access sequence above, and suppose you have a storage system with four frame buffers, so it can hold only four items. Red indicates that a new item has been stored in a frame buffer, whereas blue indicates that the incoming item is already in the buffer.
Initially, you can see that an input request is accepted and stored as long as a frame buffer is free. When the frame buffers are full, that is, at state 4, the new input request overwrites a randomly chosen frame. This is the random replacement policy.
So, to summarize, the random replacement policy (or random replacement algorithm) selects an item from storage at random and discards it to make space for the new incoming information. The algorithm keeps no information about how often a cached item is used or when it was placed into the buffer.
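To make this concrete, here is a minimal, self-contained Python sketch of a random replacement cache. The class name and the key-value interface are illustrative assumptions, not part of any standard API.

```python
import random

class RandomReplacementCache:
    """Fixed-capacity cache that evicts a randomly chosen entry when full."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # key -> value; no usage history is tracked

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Any resident key may be evicted; the policy is oblivious
            # to how often or how recently entries were used.
            victim = random.choice(list(self.store))
            del self.store[victim]
        self.store[key] = value

    def get(self, key):
        return self.store.get(key)
```

Because it stores no metadata per entry, random replacement is very cheap to implement, which is part of its appeal in hardware caches.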
First in, first out
First in, first out (FIFO) is one of the queue-based policies. Under FIFO, the cache behaves like a queue: the oldest entry (the one that arrived first) is replaced by the new one.
In the above sequence, you can see the working mechanism of the FIFO policy. When a replacement is required, that is, at state 4, the algorithm checks which buffer was allocated first and replaces its contents with the incoming request.
This algorithm keeps track only of insertion order. It evicts blocks in the order they were added, without considering how often or how recently the data has been accessed.
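Here is a minimal FIFO sketch in Python. It assumes the same illustrative key-value interface as above and uses `collections.OrderedDict`, which remembers insertion order.

```python
from collections import OrderedDict

class FIFOCache:
    """Fixed-capacity cache that evicts the entry inserted earliest."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # preserves insertion order

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # popitem(last=False) removes the oldest insertion.
            self.store.popitem(last=False)
        self.store[key] = value  # re-writing a key keeps its position

    def get(self, key):
        # Unlike LRU, a read does not refresh the entry's position.
        return self.store.get(key)
```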
Least recently used
Least recently used (LRU) is a type of recency-based policy. A recency-based policy takes into consideration when data was last accessed.
Under the LRU policy, the algorithm replaces the least recently used item in storage with the new incoming data, item, or request. The algorithm keeps track of the access history of the data, typically using a data structure such as a linked list or an array. When an item is accessed, it is moved to the front of the data structure, or a flag is set to indicate that the item was recently used.
Let’s consider the following example:
The above table shows that, in the 3rd state, item 2 is accessed again by the system. As a result, the least recently used item becomes 3. At the 5th state, when the buffer is full and a new item is introduced, it replaces 3, the least recently used item. Similarly, from state 6 onwards, the algorithm replaces whichever item was accessed the longest ago.
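The sketch below shows one common way to implement LRU in Python, using `collections.OrderedDict` to keep entries ordered from least to most recently used. The interface is the same illustrative key-value one as above.

```python
from collections import OrderedDict

class LRUCache:
    """Fixed-capacity cache that evicts the least recently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # least recent first, most recent last

    def get(self, key):
        if key not in self.store:
            return None
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)  # updating counts as an access
        elif len(self.store) >= self.capacity:
            self.store.popitem(last=False)  # drop the least recent entry
        self.store[key] = value
```

The classic textbook alternative is a doubly linked list paired with a hash map, which also gives constant-time reads and evictions.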
So, if you want to create a system that favors keeping the most recently used items in cache storage over other items, you can use the LRU policy.
Least frequently used
Least frequently used (LFU) is a simple frequency-based caching policy. With this method, the system keeps track of the number of times each block is referenced in memory. When the cache is full and more room is required, the system discards the item with the lowest reference frequency. This works very similarly to LRU, except that instead of storing how recently a block was accessed, you store how many times it has been accessed. So, while running an access sequence, you replace the block that was used the fewest times.
Let’s consider the following example:
In the above table, at state 6, you can see the frequency of each item. The items with the highest frequency are 1 and 6, and the items with the lowest frequency are 3, 5, and 4. Items 3 and 5 have already been discarded before state 6, so the least frequently used item is 4, with a frequency of 1. You discard it, and the new value, 2, is stored in its place. This is how a simple LFU policy works. There is also an algorithm called LFRU that combines LFU and LRU. In addition, other LFU variants differ in how they keep track of frequency.
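A minimal LFU sketch in Python follows, again assuming the illustrative key-value interface used above. Note that real LFU variants differ in how they break frequency ties; `min()` below simply makes an arbitrary but deterministic choice.

```python
from collections import Counter

class LFUCache:
    """Fixed-capacity cache that evicts the least frequently used entry."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}        # key -> value
        self.freq = Counter()  # key -> number of accesses

    def get(self, key):
        if key not in self.store:
            return None
        self.freq[key] += 1  # every read raises the frequency
        return self.store[key]

    def put(self, key, value):
        if key not in self.store and len(self.store) >= self.capacity:
            # Evict the resident key with the smallest access count.
            victim = min(self.store, key=lambda k: self.freq[k])
            del self.store[victim]
            del self.freq[victim]
        self.store[key] = value
        self.freq[key] += 1  # a write also counts as an access
```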
So, if you want to create a system that favors keeping the most frequently used items in cache storage over other items, you can use the LFU policy.
Conclusion
You have learned some caching policies that are simple to understand and implement. However, more sophisticated and efficient algorithms are in use today; for additional information about caching policies, you can explore online resources and blog articles. In this topic, you covered the concepts of the random replacement, FIFO, LRU, and LFU policies. These strategies can be implemented with simple data structures, so feel free to experiment with them.