Performance and efficiency are key concerns in web development. How can you improve your application's performance? Caching is usually the most effective way to take an application to the next level. Every time you render a template, you fetch data from a database, third-party APIs, or files, apply your business logic, and only then serve the result to the end user; repeating all of that work for every request is wasteful.
Caching is essential for dynamic websites. Django comes with built-in options to save your dynamic pages and avoid processing them again the next time a client requests them. Django lets you cache your entire site, the output of a specific view, or a fragment of a template. If you want to make your web applications faster and more responsive, the Django caching framework is what you need.
Django supports the following cache backends out of the box:
Local memory cache
Database cache
File system cache
Memcached
So let's dive into optimizing and improving the performance of Django projects. Imagine we have a bookstore app that is growing steadily. Initially you are building a simple bookstore without much traffic and want to improve performance. In such a situation, configuring local memory caching is enough.
Local Memory Caching
By default, Django uses the local memory caching backend. This method stores cached data in the RAM of the process running your Django application. It's fast and an excellent choice for local development or testing. The disadvantage of local memory caching is that each process keeps its own private cache, which cannot be shared across servers, so it works best with a single Django instance.
In Django, you can configure local memory caching using the built-in backend django.core.cache.backends.locmem.LocMemCache. Add the code below to your project's settings.py file to do this:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.locmem.LocMemCache',
        'LOCATION': 'bookstore_cache',
    }
}

The LOCATION value names the individual memory store. If you only have one locmem cache, you can omit LOCATION. But if you define several local memory caches, give at least one of them a name to keep them separate.
Local memory caching is simple and needs no extra infrastructure. It's useful for keeping the results of frequent queries, such as the book list, in the server's memory and taking that load off the database.
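For instance, a view can read the book list through Django's low-level cache API instead of querying the database on every request. The snippet below is a minimal sketch: the bookstore app, the Book model, and the template name are assumptions made for this example, and any backend configured in CACHES will work the same way.

# bookstore/views.py (sketch)
from django.core.cache import cache
from django.shortcuts import render

from .models import Book  # assumed model for the bookstore example


def book_list(request):
    # get_or_set returns the cached value if it exists; otherwise it calls
    # the lambda, stores the result for five minutes, and returns it.
    books = cache.get_or_set(
        'book_list',
        lambda: list(Book.objects.all()),
        timeout=60 * 5,
    )
    return render(request, 'bookstore/book_list.html', {'books': books})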
Local caching is mainly for development and testing because the data in memory can get lost when the server restarts. In production environments, use more persistent methods to maintain cached data even after restarting the server.
Database Caching
Your store is growing with an increasing number of customers and books, and you wish to limit the number of database queries.
Database caching stores the cached results in your existing database. It is easy to set up because it needs no extra services, but it is not the best-performing option: the cache competes with your application for the same database and adds to its overall load.
To configure database caching, add the following code to your project's settings.py file:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.db.DatabaseCache',
        'LOCATION': 'bookstore_cache',
    }
}

Next, create the cache table. Here LOCATION is the name of the database table, bookstore_cache in this example.
To create the cache table, execute the following command in your terminal or command prompt:
python manage.py createcachetable

You can use database caching to store data such as book details, ratings, and reviews, so the same values are not queried over and over.
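For example, an average rating is a relatively expensive aggregate that rarely changes between requests, so it is a good candidate for the cache. This is a sketch only: the Review model, its rating field, and the key format are assumptions for the bookstore example.

# bookstore/ratings.py (sketch)
from django.core.cache import cache
from django.db.models import Avg

from .models import Review  # assumed model with a book foreign key and a rating field


def average_rating(book_id):
    # Recompute the aggregate at most once per hour; in between, the
    # cached value is returned without touching the reviews table.
    key = f'book_avg_rating_{book_id}'
    rating = cache.get(key)
    if rating is None:
        rating = Review.objects.filter(book_id=book_id).aggregate(
            avg=Avg('rating')
        )['avg']
        cache.set(key, rating, timeout=60 * 60)
    return rating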
Database caching can optimize performance and reduce database loads, especially when running high-cost queries that rarely change. But if you need to store large amounts of data, or share a cache between different instances of an application, it might be beneficial to cache information outside the database, for example, in the file system.
File System Cache
File system caching stores cached data as files in a directory you specify. Add the following code to your Django project's settings.py file to set up file system caching:
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/path/to/bookstore/books/cache',
    }
}

LOCATION must point to a directory that the web server process can write to. The benefits of the file system cache include easy setup and long-term retention, since cached entries survive server restarts. It is well suited to static data and large cached objects, which is why many developers like this method.
However, this method has its challenges. Performance degrades with very large amounts of cached data, the operating system may limit the number of files in a directory, sharing the cache in a distributed environment requires additional solutions, and you have to manage the cached files and their permissions yourself.
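One way to keep the size of the cache directory under control is the TIMEOUT and OPTIONS keys that Django's built-in backends accept. The numbers below are illustrative, not recommendations:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.filebased.FileBasedCache',
        'LOCATION': '/path/to/bookstore/books/cache',
        'TIMEOUT': 60 * 60,        # default lifetime of an entry, in seconds
        'OPTIONS': {
            'MAX_ENTRIES': 1000,   # start culling once this many entries exist
            'CULL_FREQUENCY': 3,   # delete 1/CULL_FREQUENCY of entries when culling
        },
    }
}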
File system cache provides a reliable space to store data, but it may not be suitable for high-load scenarios. In such cases, Memcached can help.
Memcached
Memcached is the fastest and most efficient cache type natively supported by Django. Many high-traffic sites rely on Memcached to reduce the number of queries that reach the database.
Memcached runs as a daemon that is allocated a fixed amount of RAM for cached data, and Django talks to it over the network. Add the following code to your Django project's settings.py file to set up Memcached (Django 4.1 and later ship the PyMemcacheCache backend; the older MemcachedCache backend based on python-memcached has been removed):
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        'LOCATION': '127.0.0.1:11211',
    }
}

Here, LOCATION is the address and port where your Memcached server is listening.
Next, install the client library the backend depends on:

pip install pymemcache

Memcached offers excellent performance and scalability, which makes it a strong choice for caching data in large, high-load web applications.
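Because Memcached lives outside your application processes, several Django instances can share one cache, and the cache itself can be spread across several servers. A sketch of such a setup, where the two addresses are placeholders for your own cache servers:

CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.PyMemcacheCache',
        # Keys are distributed across all listed servers, so every Django
        # instance pointed at this list shares one logical cache.
        'LOCATION': [
            '10.0.0.10:11211',
            '10.0.0.11:11211',
        ],
    }
}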
Although Memcached offers powerful capabilities for data caching and performance improvements, caching individual pieces of data by hand is not always enough. If most pages of your bookstore change rarely and look the same for every visitor, caching whole pages, or even the entire site, becomes the simpler option.
Caching the Entire Site
You can implement caching at different levels in Django. You can cache the whole site or specific parts of your website:
Per-site cache
Per-view cache
Template fragment cache
The simplest way to work with the Django cache is to cache your whole website. Add two middleware classes, Update and Fetch, to your project's settings.py file to cache your entire application:
MIDDLEWARE = [
    'django.middleware.cache.UpdateCacheMiddleware',     # UPDATE
    'django.middleware.common.CommonMiddleware',
    'django.middleware.cache.FetchFromCacheMiddleware',  # FETCH
]

The order is important: django.middleware.cache.UpdateCacheMiddleware must come first in the list and django.middleware.cache.FetchFromCacheMiddleware last.
Next, add the following code to your project's settings.py file:
CACHE_MIDDLEWARE_ALIAS = 'default'
CACHE_MIDDLEWARE_SECONDS = 600

The CACHE_MIDDLEWARE_ALIAS setting selects which entry from CACHES to use for storage, while CACHE_MIDDLEWARE_SECONDS is the number of seconds (an integer, not a string) each page is kept in the cache.
Caching your whole app isn't optimal for dynamic, high-traffic apps with a memory-based cache backend, because memory consumption grows quickly. It also forces you to invalidate the cache whenever site data changes, and pages that depend on user sessions, cookies, or other request parameters are a poor fit for a site-wide cache.
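If you do cache the whole site but a few views must always be served fresh (a shopping cart, for example), you can exclude them with Django's never_cache decorator. The cart_view below is a hypothetical view used only for illustration:

from django.http import HttpResponse
from django.views.decorators.cache import never_cache


@never_cache
def cart_view(request):
    # never_cache adds headers that prevent this response from being cached,
    # so the per-site middleware always regenerates it.
    return HttpResponse("Your cart contents")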
Per-View Cache
Rather than caching the whole site and wasting memory on pages that are rarely revisited, you can cache a specific view. It's an efficient way to add caching to your Django app. For example, you could cache a greeting page rendered for each user. Apply the cache_page decorator that ships with Django either directly to the view function or to the path in your URLconf. It's straightforward to set up and use.
Add the following code to your app views.py file to cache your view:
from django.http import HttpResponse
from django.views.decorators.cache import cache_page


@cache_page(60 * 15)
def greetings(request, first_name, last_name):
    message = "<h2>Welcome {} {}</h2>".format(first_name, last_name)
    return HttpResponse(message)

Next, map your view to a URL in your urls.py file:
from django.urls import path

from my_app import views

urlpatterns = [
    path('user/<str:first_name>/<str:last_name>', views.greetings, name='greetings'),
]

The cache_page(60 * 15) decorator takes the cache expiry time in seconds as its argument. In the example above, the result of your greetings view is cached for 15 minutes (60 * 15 = 900 seconds).
Because the view takes URL parameters, each distinct request from end users is cached separately. For example, a request to /user/James/Brown is cached independently of /user/Charles/Ugo.
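If a cached view also depends on who is logged in rather than only on the URL, you can combine cache_page with Django's vary_on_cookie decorator so that each session gets its own cache entry. profile_page here is a hypothetical view, not one defined earlier in the article:

from django.http import HttpResponse
from django.views.decorators.cache import cache_page
from django.views.decorators.vary import vary_on_cookie


@cache_page(60 * 15)
@vary_on_cookie
def profile_page(request):
    # vary_on_cookie adds "Vary: Cookie", so visitors with different session
    # cookies get separate cache entries instead of sharing one page.
    return HttpResponse("<h2>Welcome back, {}</h2>".format(request.user.username))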
You can also cache a view directly in your urls.py file. Remove the cache_page(60 * 15) decorator in your greetings view and add it to the URL mapped to your greetings view:
from django.urls import path
from django.views.decorators.cache import cache_page

from my_app import views

urlpatterns = [
    path('user/<str:first_name>/<str:last_name>', cache_page(60 * 15)(views.greetings), name='greetings'),
]

Here cache_page wraps the view right in the URLconf and, as before, specifies the cache expiration time in seconds.
You can use whichever method you prefer: decorate the view in views.py or wrap it in urls.py.
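If your bookstore uses class-based views, the same decorator can be attached with method_decorator. BookListView and the template name below are assumptions for the bookstore example:

from django.utils.decorators import method_decorator
from django.views.decorators.cache import cache_page
from django.views.generic import ListView

from .models import Book  # assumed model


@method_decorator(cache_page(60 * 15), name='dispatch')
class BookListView(ListView):
    # Decorating dispatch() caches the rendered response of every GET
    # request to this view for 15 minutes.
    model = Book
    template_name = 'bookstore/book_list.html'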
Per-view cache is useful when you just need to cache a view, but it isn't enough in certain cases. This is particularly true when a page has static parts that rarely change, and dynamic parts that need fresher data and individual cache management for optimal performance.
Template Fragment Cache
Template caching lets you cache a complete template or just a fragment of it. The latter is usually preferable, since caching entire templates over and over is inefficient when only part of the page is expensive to render. For instance, suppose you want to cache the book list:
{% extends "base.html" %}

{% block title %}Books{% endblock %}

{% block content %}
  <h2>Explore Our Collection of Books</h2>
  {% for book in books %}
    <div>
      <h2>{{ book.title }}</h2>
      <a href="{% url 'bookstore:book_details' book.pk %}">Learn More</a>
    </div>
  {% endfor %}
{% endblock %}

To cache your template, add {% load cache %} near the top of the template.
You can cache template fragments with the {% cache %} template tag. It takes at least two arguments: the cache timeout in seconds and a name for the fragment:
{% extends "base.html" %}
{% load cache %}

{% block title %}Books{% endblock %}

{% block content %}
  <h2>Explore Our Collection of Books</h2>
  {% cache 86400 book_list %}
    {% for book in books %}
      <div>
        <h2>{{ book.title }}</h2>
        <a href="{% url 'bookstore:book_details' book.pk %}">Learn More</a>
      </div>
    {% endfor %}
  {% endcache %}
{% endblock %}

Template fragment caching is useful when a page mixes static elements that rarely change with dynamic elements that need current data. It lets you balance performance with data freshness by caching only the parts of the page that are worth caching, which becomes especially valuable in complex, dynamic web applications.
Conclusion
Django has a convenient caching mechanism for your views. It enables you to:
Cache specific responses for your views for a specific time;
Cache templates and parts of them;
Cache the entire site.
To configure this mechanism, set it up in your Django project's settings in the CACHES section. You can select from different caching backends:
Local memory cache, ideal for quickly storing often-used data in-memory, ensuring fast access to crucial information;
File system cache, suitable for large or rarely changing cached data that should survive server restarts;
Database cache, which stores entries in your existing database and therefore offers persistence and reliability without extra infrastructure;
Memcached, which can provide fast caching, particularly useful for frequently accessed data, in a high-traffic website, although it requires additional setup.
To implement caching in your code, you have several options:
Use a decorator to cache a specific view or function, making it faster and more efficient;
Use the low-level cache API (cache.set(), cache.get(), cache.get_or_set()) when you need to cache specific pieces of data for later use, and remember to invalidate those entries when the data changes, as in the sketch after this list;
Use the {% cache %} template tag when working with templates, and make sure you load the cache library with {% load cache %} first. This lets you cache dynamic template content and speed up rendering.
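One point the list above leaves implicit is invalidation: when the underlying data changes, stale entries have to be removed. A minimal sketch, assuming the book_list and book_avg_rating_<id> keys used in the earlier examples:

from django.core.cache import cache
from django.db.models.signals import post_save
from django.dispatch import receiver

from .models import Book  # assumed model from the bookstore example


@receiver(post_save, sender=Book)
def invalidate_book_cache(sender, instance, **kwargs):
    # Drop the cached list and the per-book rating entry so the next request
    # rebuilds them with fresh data instead of serving stale values.
    cache.delete('book_list')
    cache.delete(f'book_avg_rating_{instance.pk}')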
Remember, sometimes the best way to decide is to test under load. Try different settings yourself, then keep the caching mechanism that performed well. This strategy is often simpler and more efficient than extensive up-front analysis and can lead to better results.