Implementing LRU Cache



      This article covers the implementation of a Least Recently Used cache (LRU cache) in C++. The article is broken down into 3 sections:

     1. What is a cache?

     2. How can it be implemented?

     3. Source code

 What is a Cache?

     A cache is a small, high-speed memory used by the CPU to store recently used information for faster retrieval. If fetching data from memory takes longer than the CPU needs to process it, the processor stalls and overall execution time suffers. To speed up processing, frequently accessed information is kept in the cache, which is much faster than main memory or secondary storage.

 How can it be implemented?

     In general, any algorithm can be implemented in any number of ways, but it is always wise to look for the most efficient solution with respect to space and time complexity. In simple words, time complexity describes how the running time of a function grows with the size of its input, and space complexity describes how the memory it requires grows with the size of its input.

     In order to build an LRU cache, we need a design with very fast (O(1)) lookup, deletion, and update of the access order. Here O(1) means that the operation always takes constant time, regardless of the number of values stored.

     The best bet for this is a combination of a hash table and a doubly linked list. A hash table alone offers constant-time lookup and insertion, but it has no notion of which value was queried most recently. A doubly linked list supports constant-time deletion and reordering, provided we already have the address of the node, and that address is exactly what the hash table can supply.

     For the implementation, the hash table maps each key to the address of its node in the linked list, and the list holds the keys themselves. The list is arranged so that updates happen at the head, meaning the most recently used key sits at the front and the least recently used key automatically drifts to the tail.
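     The move-to-front step described above can be sketched with the standard containers. The snippet below is a minimal illustration (the variable names are illustrative, not from the source code that follows); it uses std::list::splice, which relinks an existing node to the front in O(1) without invalidating the iterator stored in the hash table:

#include <iostream>
#include <list>
#include <unordered_map>

int main() {
    std::list<int> order = {3, 2, 1};  // head = most recently used
    std::unordered_map<int, std::list<int>::iterator> pos;
    for (auto it = order.begin(); it != order.end(); ++it)
        pos[*it] = it;                 // hash table gives O(1) access to each node

    // Access key 1: splice relinks its node to the front in O(1),
    // and the stored iterator remains valid afterwards.
    order.splice(order.begin(), order, pos[1]);

    for (int k : order) std::cout << k << ' ';  // prints: 1 3 2
    std::cout << '\n';
}

     The source code below instead uses erase followed by push_front, which has the same O(1) cost for a list but requires refreshing the stored iterator.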



Source code:
#include<iostream>
#include<unordered_map>
#include<list>
using namespace std;

class lru{
    list<int> el;                                //doubly linked list storing the keys, most recent at the head
    unordered_map<int, list<int>::iterator> ref; //hash table mapping each key to its node's address in the list
    int n;                                       //capacity of the cache
public:
    /*Constructor to initialize the capacity of the cache*/
    lru(int cap){
        n=cap;
    }
    void insert(int);
    void print();
};

/*Add a value to the list if new, or move it to the front if already present*/
void lru::insert(int val){
    if(ref.find(val)==ref.end()){       //the value is not in the cache
        if((int)ref.size()==n){         //the cache is full
            int x=el.back();
            el.pop_back();              //removing the least recently used key from the list
            ref.erase(x);               //erasing its key from the hash table
        }
    }
    else{
        el.erase(ref.find(val)->second);//if present, erasing its old node
    }
    el.push_front(val);                 //adding the value to the front
    ref[val]=el.begin();                //storing the new address in the hash table
}

/*Iterating through the entire list and printing the elements*/
void lru::print(){
    for(auto it=el.begin();it!=el.end();it++)
        cout<<*it<<" ";
    cout<<endl;
}

int main(){
    lru cache(3);   //to specify the capacity
    /*Refer to the explanation below*/
    cache.insert(1);//1
    cache.insert(2);//2
    cache.insert(3);//3
    cache.insert(2);//4
    cache.insert(4);//5
    cache.insert(2);//6
    cache.insert(5);//7
    cache.print();
}

OUTPUT:
5 2 4 
Explanation:

1. 1
2. 2, 1
3. 3, 2, 1
4. 2, 3, 1 (Position of 2 is changed)
5. 4, 2, 3 (least recently used 1 is deleted and 4 is inserted)
6. 2, 4, 3 (Position of 2 is changed)
7. 5, 2, 4 (least recently used 3 is deleted and 5 is inserted)
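     The class above tracks only keys. A practical cache usually maps keys to values and exposes a get operation; the hedged sketch below extends the same hash-table-plus-list design in that direction. The class and member names (lru_kv, put, get) are hypothetical, not part of the article's source:

#include <cassert>
#include <list>
#include <optional>
#include <unordered_map>

// Hypothetical key->value variant of the article's design: the list stores
// (key, value) pairs with the most recently used at the front, and the hash
// table stores each key's node iterator.
class lru_kv {
    int n;                                                           // capacity
    std::list<std::pair<int,int>> el;                                // (key, value) pairs
    std::unordered_map<int, std::list<std::pair<int,int>>::iterator> ref;
public:
    explicit lru_kv(int cap) : n(cap) {}
    void put(int key, int value) {
        auto it = ref.find(key);
        if (it != ref.end()) el.erase(it->second);                   // drop the old node
        else if ((int)ref.size() == n) {                             // full: evict the LRU pair
            ref.erase(el.back().first);
            el.pop_back();
        }
        el.push_front({key, value});
        ref[key] = el.begin();                                       // refresh the stored address
    }
    std::optional<int> get(int key) {                                // O(1) lookup + move to front
        auto it = ref.find(key);
        if (it == ref.end()) return std::nullopt;
        el.splice(el.begin(), el, it->second);                       // relink the node in O(1)
        return it->second->second;
    }
};

int main() {
    lru_kv c(2);
    c.put(1, 10);
    c.put(2, 20);
    assert(*c.get(1) == 10);   // key 1 becomes most recently used
    c.put(3, 30);              // evicts key 2, the least recently used
    assert(!c.get(2));
    assert(*c.get(3) == 30);
}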

