
Cache


When we develop services or applications that are interconnected, it's important to optimize how they use and consume data. Caching helps us speed up operations and build more efficient software products. In this topic, we will learn to manage an in-memory cache with Cache4k, a multiplatform caching library for Kotlin.

Cache benefits

Repeatedly accessing files or databases can be a costly process, especially if we always query the same data. A caching system saves time by keeping an intermediate in-memory copy of the results of such queries.

Here's a simple example: let's say you have an e-commerce website with a feature that recommends products to users based on their browsing history. Calculating these recommendations might involve complex database queries and algorithms and could take a significant amount of time. By using a cache, you could store the recommendations for each user after they are calculated. Then, the next time the same user visits the site, you could retrieve the recommendations from the cache instead of calculating them all over again. This makes the website faster and more responsive for users, and it reduces the load on your servers.
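As a minimal sketch of this idea (using a plain Kotlin map instead of a real cache library; the names recommendationsFor and recommendationCache are made up for illustration), the expensive work runs only on a cache miss:

```kotlin
// Hypothetical cache-aside sketch: expensive work runs only on a cache miss.
val recommendationCache = mutableMapOf<Int, List<String>>()
var computations = 0 // counts how often the expensive work actually runs

fun recommendationsFor(userId: Int): List<String> =
    recommendationCache.getOrPut(userId) {
        computations++ // stand-in for complex database queries and algorithms
        listOf("product-$userId-a", "product-$userId-b")
    }

fun main() {
    recommendationsFor(42) // first visit: computed and stored
    recommendationsFor(42) // second visit: served from the cache
    println(computations)  // 1
}
```

The second call for the same user never touches the expensive computation, which is exactly the saving a cache provides.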

When we work with an in-memory cache (which can sometimes be a simple map), we can set its refresh to be based on usage, the last accessed item, frequency of access, etc. Each problem may require a specific cache solution or use.
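For instance, a usage-based ("least recently used") policy can be sketched on top of Java's LinkedHashMap in access order; the class name SimpleLruCache here is hypothetical, not part of any library:

```kotlin
// Minimal LRU cache sketch: LinkedHashMap with accessOrder = true keeps
// entries ordered by last access, so the eldest entry is the least recently used.
class SimpleLruCache<K, V>(private val maxSize: Int) {
    private val map = object : LinkedHashMap<K, V>(16, 0.75f, true) {
        override fun removeEldestEntry(eldest: MutableMap.MutableEntry<K, V>): Boolean =
            size > maxSize // evict the least recently used entry when over capacity
    }

    fun put(key: K, value: V) { map[key] = value }
    fun get(key: K): V? = map[key]
}

fun main() {
    val cache = SimpleLruCache<Int, String>(2)
    cache.put(1, "one")
    cache.put(2, "two")
    cache.get(1)          // touch key 1: key 2 is now the least recently used
    cache.put(3, "three") // capacity exceeded: key 2 is evicted
    println(cache.get(2)) // null
    println(cache.get(1)) // one
}
```

Hand-rolling policies like this gets tedious quickly, which is where a dedicated library such as Cache4k comes in.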

Install and configure Cache4k

Cache4k is a straightforward in-memory key-value caching library for Kotlin Multiplatform, featuring evictions based on time (expiration) and size.

To use Cache4k, we need to install the right dependency in build.gradle.kts. In the following code, x.y.z is the version of the dependency:

dependencies {
    // Kotlin coroutine dependency, if you want to use async mode
    implementation("org.jetbrains.kotlinx:kotlinx-coroutines-core:x.y.z")
    
    // Cache4K 
    implementation("io.github.reactivecircus.cache4k:cache4k:x.y.z")
    testImplementation(kotlin("test"))
}

Create a cache

When creating a cache, we can have several modes of operation. By default, a cache can hold an infinite number of entries that do not expire. However, it's possible to configure a cache to support expirations based on time and evictions based on size. We need a key to identify an object and the value to store in the cache.

  • Default: infinite values and infinite time.
  • Time-based expiration: the expiration time can be defined for each entry in the cache.
    1. Expire after access: it is used to establish the maximum duration an entry can remain in the cache since its last access (commonly referred to as time-to-idle), where "access" includes reading the cache, adding a new cache entry, or updating an existing entry with a new one. In this cache, an entry will be evicted if it hasn't been read or updated in the specified time after being written into the cache.
    2. Expire after write: it is used to set the maximum lifespan of an entry in the cache since its last write operation (also known as time-to-live), where "write" means adding a new cache entry or replacing an existing entry. An entry in this cache will be evicted if it hasn't been updated within the specified time (for example, 1 hour) after being written into the cache. Please note that cache entries are not removed immediately upon expiration; expirations are checked during each interaction with the cache.
  • Setting size-based eviction: it is used to specify the maximum number of entries that can be stored in the cache. For example, once the cache holds more than 100 entries, the least recently used entry will be evicted. Here, "used" refers to reading the cache, adding a new cache entry, or replacing an existing entry.

Also, we can combine all of the above to adjust to our needs.

// Create a cache with default settings: unlimited size, entries never expire
val cacheInfinite = Cache.Builder<Int, Person>()
    .build()

// Entries expire 1 hour after access
val cacheWithExpiryAccess = Cache.Builder<Int, Person>()
    .expireAfterAccess(1.hours)
    .build()

// Entries expire 1 hour after write
val cacheWithExpiryWrite = Cache.Builder<Int, Person>()
    .expireAfterWrite(1.hours)
    .build()

// Size-based eviction
val cacheWithSizeBasedEviction = Cache.Builder<Int, Person>()
    .maximumCacheSize(100)
    .build()

// Size- and time-based evictions together
val cacheWithSizeAndTimeBasedEviction = Cache.Builder<Int, Person>()
    .maximumCacheSize(100)
    .expireAfterWrite(1.hours)
    .build()

Writing and reading cache entries

We can use put to start writing entries to the cache. The get method is used to read a cache entry by key. It will return a null value if the key does not exist. Additionally, put is used to overwrite an existing cache entry (if it does not exist, it will be added):

val cache = Cache.Builder<Int, Person>().build()

// write to cache
cache.put(1, Person(1, "John", 20))
cache.put(2, Person(2, "Jane", 30))
cache.put(3, Person(3, "Mary", 40))

// read from cache
cache.get(1)?.let { println(it) } ?: run { println("Not found") }
// Person(id=1, name=John, age=20)

// Overwrite existing entry
cache.put(1, Person(1, "John", 21))
cache.get(1)?.let { println(it) } ?: run { println("Not found") }
// Person(id=1, name=John, age=21)

In addition, we can obtain all entries as a map with the asMap method:

val entries = cache.asMap()
entries.forEach { (key, value) -> println("$key -> $value") }
// 1 -> Person(id=1, name=John, age=21)
// 2 -> Person(id=2, name=Jane, age=30)
// 3 -> Person(id=3, name=Mary, age=40)

We can also pass a suspending loader function to get: if the key is not present, the loader runs to fetch the value asynchronously, for instance, from a database, file, or web service, and the result is stored in the cache.

// using a suspend function to retrieve async data
cache.get(4) { retrieveAsyncPerson(4) }.also { println(it) }
// Person(id=4, name=Async person, age=20)
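The snippet above assumes a loader such as retrieveAsyncPerson exists; a minimal sketch of one (simulating a slow lookup with delay from kotlinx-coroutines, alongside the Person data class used throughout this topic) might look like this:

```kotlin
import kotlinx.coroutines.delay

data class Person(val id: Int, val name: String, val age: Int)

// Hypothetical async loader: in a real app this would query a database,
// a file, or a web service instead of just pausing.
suspend fun retrieveAsyncPerson(id: Int): Person {
    delay(100) // simulate network or database latency
    return Person(id, "Async person", 20)
}
```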

Deleting cache entries

We can also explicitly remove cache entries: the invalidate method removes a specific entry by its key, while invalidateAll removes all entries:

// Invalidate by key
cache.invalidate(1)
cache.get(1)?.let { println(it) } ?: run { println("Not found") }
// Not found

// Invalidate all entries
cache.invalidateAll()
println(cache.asMap().size)
// 0

Watch for changes

We can also register an event listener as a lambda, enabling us to identify or respond to changes in the cache (Created, Updated, Evicted, Expired, or Removed).

val reactiveCache = Cache.Builder<Int, Person>()
    .eventListener { event ->
        println("onEvent: $event")
    }
    .build()

reactiveCache.put(1, Person(1, "John", 20))
reactiveCache.put(2, Person(2, "Jane", 30))
reactiveCache.put(1, Person(3, "Mary", 40))
reactiveCache.invalidate(1)

// onEvent: Created(key=1, value=Person(id=1, name=John, age=20))
// onEvent: Created(key=2, value=Person(id=2, name=Jane, age=30))
// onEvent: Updated(key=1, oldValue=Person(id=1, name=John, age=20), newValue=Person(id=3, name=Mary, age=40))
// onEvent: Removed(key=1, value=Person(id=3, name=Mary, age=40))
// onEvent: Removed(key=2, value=Person(id=2, name=Jane, age=30))

Conclusion

In this topic, we have learned how to use a cache with Kotlin and Cache4k, and now you can improve your apps with more efficient code:

  • Speed: In-memory caches are extremely fast because they store data in RAM, and accessing data from RAM is much faster than accessing it from disk or over a network. This speed can greatly improve the performance of applications, particularly those that need to access the same data frequently.
  • Reduced load on backend systems: By storing frequently accessed data in the cache, you can significantly reduce the load on your backend systems, such as databases or web services. This can help to increase the overall scalability of your application, as your backend systems won't need to handle as many requests.
  • Improved user experience: By speeding up data access times, in-memory caches can make applications more responsive, leading to a better user experience. This is particularly important for web applications, where slow load times can lead to user frustration and abandonment.

It's time to put your knowledge to the test with a few tasks. Are you ready?
