
How to implement a LRU Cache in Java

Ajk

How do you implement an LRU cache? I was asked this question in a phone interview a long time ago. If you have rarely used or even heard the term “cache”, the question comes as quite a surprise. You have probably heard of caches in CPUs or HDDs, but implementing a cache yourself? I remember that when I first came across this question I was quite surprised, and I had to think hard before I even understood what was being asked.

Understanding a problem is the first step towards solving it. So the first thing you should ask yourself is: what is an LRU cache? LRU stands for Least Recently Used. An LRU cache is a component that serves future requests faster by keeping recently requested items in order from most to least recently used, and evicting the least recently used item when a new one comes in and the cache is full.

Some design specifications worth mentioning for an LRU cache:

  • Cache hits need to be as fast as possible (meaning O(1))
  • We need to maintain an updated list of items so that we can easily move things to the top of the cache and easily discard items at the bottom

With this design specification in mind, we can pin down the data structures needed.

  • To guarantee O(1) access on a hit, we keep all cached items in a HashMap
  • To maintain a continuously updated usage order, we keep the keys in some kind of Doubly Linked List, where we can easily reach both the head and the tail

Let’s take a look at what the whole picture with both data structures would look like:

[Figure: LRU Cache — a HashMap whose entries point into a doubly linked list]

Thankfully, Java already has a ton of data structures ready to use. For our current task, I have found a hidden gem that makes it fairly trivial: the already-defined java.util.LinkedHashMap (documentation found at the LinkedHashMap API). Let’s start with that.

How to implement a LRU cache – Java simple implementation
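The embedded code for this section is no longer visible, so here is a minimal sketch of what a LinkedHashMap-based LRU cache typically looks like. The class name is illustrative; the key points are the `accessOrder = true` constructor flag and the `removeEldestEntry` override.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LRUCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public LRUCache(int capacity) {
        // accessOrder = true orders entries by access (LRU first), not insertion
        super(capacity, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // evict the least recently used entry once we exceed capacity
        return size() > capacity;
    }
}
```

With capacity 2, putting keys 1 and 2, touching 1 with `get`, and then putting 3 evicts key 2 — exactly the LRU behavior we want, in about a dozen lines.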

Good, but this answer is too specific to the language. Chances are that in most other languages we will not have such an almost-perfect class for our LRU cache; instead we would have to build the data structure ourselves. Your interviewer will most likely want to dig deeper too. So we will come up with a solution that properly uses both data structures mentioned at the beginning: some kind of Map and some kind of Linked List.
Not only that, but to make your answer sharper you might want to think about how parallel computing applies to our current solution. The topic at hand is called Concurrency, and in real life you really do need to worry about how your implementation will behave in a multi-threaded application. One simple answer: Java provides the static method Collections.synchronizedMap (check the Collections API for more info).

How to implement a LRU cache – Java synchronizedMap example
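The original example for this section is missing, but the idea is straightforward: wrap the LinkedHashMap-based cache in Collections.synchronizedMap so every call is serialized on a single lock. The class and helper-method names below are illustrative; note that iteration over a synchronized map still requires manual synchronization.

```java
import java.util.Collections;
import java.util.LinkedHashMap;
import java.util.Map;

public class SynchronizedLRUCacheDemo {
    // illustrative helper: builds a thread-safe LRU map of the given capacity
    public static <K, V> Map<K, V> lruCache(final int capacity) {
        return Collections.synchronizedMap(new LinkedHashMap<K, V>(capacity, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > capacity;
            }
        });
    }

    public static void main(String[] args) {
        Map<Integer, String> cache = lruCache(2);
        cache.put(1, "one");
        cache.put(2, "two");
        cache.get(1);          // touch 1 so that 2 becomes least recently used
        cache.put(3, "three"); // evicts 2
        System.out.println(cache.keySet()); // prints [1, 3]
    }
}
```

This is the cheapest route to thread safety, but it serializes all readers and writers on one lock, which is exactly the motivation for the concurrent data structures discussed next.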

Some food for thought on how this is going to pan out:

  • If an item exists in the cache – Cache Hit -> Remove the element from the Linked List and add it back at the front
  • If an item does not exist in the cache – Cache Miss -> If the cache is full, remove the oldest entry from both the Linked List and the Map. Then add the new item to both

Fortunately, once again we get plenty of help when it comes to concurrency, because we can use pre-defined data structures to implement our LRU cache in Java. If, instead of using Java’s LinkedHashMap, we search the library a bit more, we will find ConcurrentHashMap and ConcurrentLinkedQueue, which support concurrent access and provide almost everything we need, so that we do not have to synchronize every action ourselves.

How to implement a LRU cache – Java in-depth implementation
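The original listing for this section is not visible either, so here is a hedged sketch along the lines described: a ConcurrentHashMap for O(1) lookups plus a ConcurrentLinkedQueue of keys tracking usage order (head = least recently used). Names are illustrative. Two known caveats, raised in the comments below as well: `queue.remove(key)` is O(n) on a linked queue, and the map/queue updates are not atomic as a pair, so a truly production-grade version needs extra coordination.

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentLinkedQueue;

public class ConcurrentLRUCache<K, V> {
    private final int capacity;
    private final ConcurrentHashMap<K, V> map = new ConcurrentHashMap<>();
    // head of the queue = least recently used key, tail = most recently used
    private final Queue<K> queue = new ConcurrentLinkedQueue<>();

    public ConcurrentLRUCache(int capacity) {
        this.capacity = capacity;
    }

    public V get(K key) {
        V value = map.get(key);
        if (value != null) {
            // move the key to the tail to mark it most recently used;
            // note: remove(key) scans the queue, so this step is O(n)
            queue.remove(key);
            queue.add(key);
        }
        return value;
    }

    public void put(K key, V value) {
        if (map.containsKey(key)) {
            queue.remove(key);          // refresh the key's position on update
        } else if (map.size() >= capacity) {
            K eldest = queue.poll();    // evict the least recently used key
            if (eldest != null) {
                map.remove(eldest);
            }
        }
        queue.add(key);
        map.put(key, value);
    }
}
```

Updating the queue inside `get` is what makes the order genuinely least-recently-*used* rather than least-recently-*inserted*.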

Looks pretty good, does it not? Now you can say you know how to implement an LRU cache in Java, and next time you are asked in an interview, you (and I) will ace the question with several different approaches and in-depth answers.

Hope you guys enjoyed it… And I’ll see you guys next time ;D

If you like one of my posts, the best way to support me is to give it a thumbs up, comment, or share it on social media 🙂
  1. PP

    I like your code, and it is a very good example of an LRU cache. I have one question though. Suppose in put(..) you check whether the hashMap contains the key, and if so you remove the key from the linkedQueue. You then check whether the linkedQueue size is equal to or greater than the fixed size; if it is smaller, you skip the code that deletes the key and value from the hashMap, and on the very next line you insert both key and value into the hashMap. Won’t this use case create a duplicate key/value in the hashMap? Please let me know if you think otherwise.

    • From Java’s HashMap API

      public V put(K key, V value)
      Associates the specified value with the specified key in this map. If the map previously contained a mapping for the key, the old value is replaced.

      So putting it again is simply going to replace the key.

      • PP

        Thanks, that clears the doubt

  2. Nikhil Khatwani

    Hi, in order to correctly implement the LRU ordering, the “get” method should update the queue by moving the accessed item to the most recently used position, e.g.:

    public int get(int key) {
        Integer value = hashMap.get(key);
        if (value != null) {
            // move the key to the tail so it becomes the most recently used
            linkedQueue.remove(key);
            linkedQueue.add(key);
            return value;
        }
        return -1;
    }
  3. TechUser2011

    There are major problems with the implementation given here.

    1. In the get() method, you need to move the key’s item to the head of the queue. The code is completely missing this action.
    2. In the put() method, you call linkedQueue.remove(key). Because ConcurrentLinkedQueue is implemented as a linked list, this remove() call will be O(N) time to find the key. The put() call should be O(1) time.
