
Production-ready application



Before the actual deployment for public use, there are things to consider: you need to remove all the temporary pieces you used for debugging, double-check that the code is clean and readable and that REST conventions are respected, and make sure you can troubleshoot if anything goes wrong.

This is the step before a production-ready application, which consists of stabilizing the application's state and ensuring its reliability.

Structure

Diagram of standard backend web application

First of all, remember the line that is the first thing you see when running the application?
Something about the development server:

WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead

WSGI

That warning means you should not use the default built-in development server in production. WSGI – the Web Server Gateway Interface – is simply the name of a Python standard describing how web servers talk to applications. Instead of the built-in development server, you should switch to a production WSGI server, because the built-in one is only suitable for local development and is neither secure nor reliable.
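To make the interface concrete, here is a minimal WSGI application: just a callable that takes the request environment and a `start_response` function and returns an iterable of bytes. Any WSGI server can invoke it, and Flask's `app` object implements this same protocol under the hood. This is a sketch for illustration, not something you would write by hand in a Flask project:

```python
# A bare WSGI application, with no framework involved.
def application(environ, start_response):
    body = b'Hello from plain WSGI!'
    status = '200 OK'
    headers = [('Content-Type', 'text/plain'),
               ('Content-Length', str(len(body)))]
    # The server provides start_response; we hand it the status and headers,
    # then return the response body as an iterable of byte strings.
    start_response(status, headers)
    return [body]
```

Saved as, say, wsgi_demo.py (a hypothetical file name), even this bare callable could be served with gunicorn -w 4 wsgi_demo:application.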

So you have a couple of options here (gunicorn, uWSGI, waitress, and so on; uvicorn is their ASGI counterpart for async frameworks). Their principle is the same: tell the server what application to run, set the number of workers, and the port to listen on... but wait, first things first.
Let's take, for example, the most popular WSGI server – gunicorn.

To run an application with gunicorn instead of the flask development server, install gunicorn like any other python module with a pip install gunicorn call from the terminal.

gunicorn -w 4 myapp:app where the -w parameter is the number of workers – independent processes with dedicated memory space that will serve HTTP requests to your application – and myapp:app is the path to your flask application (or a Django one; FastAPI speaks ASGI, so it needs a server like uvicorn instead), where myapp is the name of the module and app is the name of the application object.

Example for the simplest application:

# flask_app.py
from flask import Flask

app = Flask(__name__)


@app.route('/')
def hello_world():
    return 'Hello, World!'

In this case, we need to run gunicorn like so: gunicorn -w 4 flask_app:app. And if your app is placed a bit deeper, the usual module notation applies: gunicorn -w 4 study_project.myapp:app – here, the module myapp lives inside the study_project package.

You can also specify an IP and port with the --bind parameter, for example --bind 0.0.0.0:8000; in our case the default 127.0.0.1:8000 is fine.

Nginx

Another server? Yes. In some cases you will need a secure protocol for sensitive data transfer, serving of static resources like HTML and CSS files, or a load balancer if you decide to scale the application to a cluster of instances or expect to receive a lot of requests. In those cases, you need an Nginx server.

Here is a nginx.conf simple example for the first iteration of our application in a production environment:

server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://localhost:8000; 
    }
}

This config, which on a running Linux server should be saved under /etc/nginx/conf.d/ (the main /etc/nginx/nginx.conf includes every file from that directory), consists of a listen directive with which you specify from which port our nginx server should accept requests, i.e. the port on the server you expect requests to be sent to. The next line is responsible for the name of the server; it does not do much when you work with a single-server configuration in the config file.

The location block routes all requests whose path matches what follows that keyword, which in this case is the root sign: /. This means that all requests coming to the / path (that's all requests, since you don't have any other locations) will be forwarded to the given local IP and port. Be careful here: you need to provide the port specified after gunicorn's --bind option, or the default value of 8000 if --bind was not provided.

Configuration with docker

Nowadays, when deciding on deployment tactics, you cannot and should not discard a tool that helps a ton with easing deployment, isolating the environment, platform independence, scalability, and reliability.

This tool is called docker (or docker-compose if you want to orchestrate multiple containers: for example, to run our nginx and database in containers too). Let's see what that configuration would look like.

Flask application

First of all, you'll need a flask app, a simple one to test our docker configuration. Let's take the 'Hello, World!' one from the previous example, add a requirements.txt with the used modules, put them in a flask_app directory, and wrap it all in a Dockerfile (you will store these in the dockerfiles directory) like this:

FROM python:3.10  # an image with preinstalled python, based on debian

# setting the default directory for project files.
# all following instructions will be executed relative to this path
WORKDIR /app

# copying the requirements to be installed and the rest of the application code into the container
COPY ./flask_app/requirements.txt /app/requirements.txt
COPY ./flask_app /app

RUN pip install -r requirements.txt

# this directive is not functional, it serves more of a documentation purpose:
# to tell users about the port in use by this container
EXPOSE 8000

and the requirements.txt:

flask==2.3  
gunicorn==21.2

It's important to pin the version explicitly because, by default, the latest version is downloaded, and that can cause errors if the latest version of a module no longer supports some of the features used in our code.

You also need the compose file that will manage all of our containers. Typically, it is named docker-compose.yaml:

version: '3'

services:
  flask_app:
    build:
      context: .
      dockerfile: dockerfiles/Dockerfile_flask
    command: ["gunicorn", "flask_app:app", "-b", ":8000"]
    ports:
      - "8000:8000"
    

In the compose file, you specify all containers inside the services section. The first one will be the flask application, named flask_app. This name could be different, but it is important to note that it will be used as a regular domain name for all communication between the containers, for example in the nginx config file.

In the build section, you specify a custom dockerfile name and path, since you decided to deviate from the default name of Dockerfile; hence dockerfiles/Dockerfile_flask. The command directive will execute after the container is built: with this line you run gunicorn on the app object from the flask_app module, bound to port 8000, and at last you publish the port to the "outside" world so you can access the app from a browser at localhost:8000. The way arguments are passed to command may confuse a first-time reader, but it is the same call you would make in the terminal, just split into a list of strings with one shell token per string. That's the simple working setup.
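If you are ever unsure how a shell command breaks down into that list form, Python's standard shlex module does the splitting mechanically, following shell quoting rules. A quick sketch:

```python
import shlex

# shlex.split breaks a shell command line into argv tokens,
# the same list form the compose exec syntax expects
tokens = shlex.split("gunicorn flask_app:app -b :8000")
print(tokens)  # ['gunicorn', 'flask_app:app', '-b', ':8000']
```

Each resulting string becomes one element of the command list in docker-compose.yaml.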

Nginx server

When you decide to add more containers alongside the main flask_app, or add some more complicated routing, you will need an nginx container. For this, let's create Dockerfile_nginx in the dockerfiles folder:

FROM nginx:1.25

RUN rm /etc/nginx/conf.d/default.conf
COPY ./nginx/nginx.conf /etc/nginx/conf.d

That is a simple dockerfile: it builds on a tagged nginx image, removes the default configuration file, and places ours instead.

Now let's see what it will look like. It's pretty similar to the one in the previous chapter:

server {
    listen 80;
    server_name _;
    location / {
        proxy_pass http://flask_app:8000; 
    }
}

Here, you expect requests to arrive on the default port 80, and you forward them to the URL stated after the proxy_pass directive. Notice how we use the container name as a domain name, together with the port gunicorn is listening on, for proxying requests. A complete and comprehensive nginx guide can be found at this link to the official documentation.

Now, you need to update our docker-compose file:

version: '3'

services:
  flask_app:
    build:
      context: .
      dockerfile: dockerfiles/Dockerfile_flask
    command: bash -c "gunicorn flask_app:app -b :8000"
    
  nginx:
    build:
      context: .
      dockerfile: dockerfiles/Dockerfile_nginx
    ports:
      - "80:80"

We added the nginx service pretty much as we did for flask_app: specified the dockerfile name and set the open ports. You can see the alternative way to execute a command in the command field. We also removed flask_app's ports so users can't reach its endpoint directly via port 8000; now it is accessible only through nginx's port 80. Now go to the docker-compose.yaml level and run docker-compose up. The application should build inside the containers and launch, so you can access it by going to http://localhost in any browser.

That is the setup for a simple dockerised application. The final folder structure will look like this:

Project directory tree

Database

And last but not least – the database. We will take postgres as an example, but at the level of this topic there's no particular difference, as you are interested only in connections and containers. So, first of all, let's add a service to the docker-compose file:

version: '3'

services:
  flask_app:
   ...
    command: bash -c "python db/init_db.py && gunicorn app:app -b :8000"
    depends_on:
      - db
    
  nginx:
    ...
    depends_on:
      - flask_app
  
  db:
    image: postgres:12
    ports:
      # we could also replace ports with the expose keyword to allow only internal connections,
      # but usually it's nice to have access to the db from outside:
      - "5432:5432"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres

Besides the database setup, we added the depends_on field. This parameter controls the startup order, and it becomes increasingly important as the number of services grows: nginx already depends on flask_app, and now flask_app depends on the database service. Here you won't use a dockerfile, for the sake of the example, and no additional build steps are needed anyway. Pay attention to how environment variables are defined, like a standard parameter, in the - KEY=VALUE or KEY: VALUE format.
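One caveat worth knowing: depends_on only orders container startup, it does not wait until postgres is actually ready to accept connections, so the very first connection attempt from the init script can still fail. A common workaround is a small retry loop around the connection call. A minimal sketch, where connect_with_retry is our own helper name and connect_fn stands in for a call like psycopg2.connect with your parameters:

```python
import time


def connect_with_retry(connect_fn, attempts=10, delay=1.0):
    """Call connect_fn until it succeeds or the attempts run out."""
    for attempt in range(1, attempts + 1):
        try:
            return connect_fn()
        except Exception:
            if attempt == attempts:
                raise          # out of attempts: re-raise the last error
            time.sleep(delay)  # give the db container time to come up
```

In init_db.py you would then write something like conn = connect_with_retry(lambda: psycopg2.connect(host="db", database="postgres", user="postgres", password="postgres")). Compose healthchecks or a wait-for-it script are heavier alternatives to the same idea.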

We added a database initialization script to the command field of flask_app service. Let's review what is inside:

import psycopg2


conn = psycopg2.connect(
        host="db",
        database="postgres",
        user='postgres',
        password='postgres')

# Open a cursor to perform database operations
cur = conn.cursor()

# Execute a command: this creates a new table
cur.execute('DROP TABLE IF EXISTS topic;')
cur.execute('CREATE TABLE topic (id serial PRIMARY KEY,'
                                 'title varchar (150) NOT NULL,'
                                 'author varchar (50) NOT NULL,'
                                 'content text,'
                                 'date_added date DEFAULT CURRENT_TIMESTAMP);'
                                 )

# Insert data into the table

cur.execute('INSERT INTO topic (title, author, content)'
            'VALUES (%s, %s, %s)',
            ('A Great FLASK',
             'anonymous publisher',
             'See this topic...')
            )

conn.commit()

cur.close()
conn.close()

This script connects to the database container by its docker-compose name, db, drops and recreates a table, and inserts one row afterward. There is also a dedicated way to initialize a postgres db in a docker container that you can learn about by visiting this link to the official docker postgres image documentation.

Environment variables

An important part of a production-ready setup is security, which is why you can't leave passwords and usernames in the code. It is better to place them in separate files that will stay away from the public eye. Read how to manage secrets in this topic, or here for handling them via a CI/CD system.

There is a convenient way for docker-compose to pick up environment variables automatically — you need to set parameter env_file of service, where the path and name of the file with variables will be passed as value, for example:

services:
  pg:
    image: postgres:13
    environment:
      - POSTGRES_HOST_AUTH_METHOD=trust
    env_file:
      - ../dev.env

The file should consist of one KEY=VALUE pair per line. By common convention, keys are written in CAPS. Quoted values are handled specially; you can find more information by following this link. These variables can then be passed into the flask application.
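On the application side, the variables land in the process environment, where the standard os module can read them. A sketch, assuming the variable names from the compose example above and a helper name of our own invention:

```python
import os


def read_db_settings(env=os.environ):
    """Pull connection settings from the environment, failing fast when a secret is missing."""
    password = env.get("POSTGRES_PASSWORD")
    if password is None:
        # better to crash at startup than to connect with a wrong default
        raise RuntimeError("POSTGRES_PASSWORD is not set")
    return {
        "user": env.get("POSTGRES_USER", "postgres"),  # non-secret value with a fallback
        "password": password,
    }
```

The settings dict can then feed psycopg2.connect, keeping credentials out of the source code entirely.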

Troubleshooting

During the deployment, update, or regular usage, you will most likely face some problems and hiccups. It will be helpful to see what was the application state before the issue appeared, and for this purpose, there is logging.

Logging

Multiple layers of logging are at our disposal: gunicorn and flask each have their own logging system, and so do nginx and docker. Here you can read about nginx logging, and look there for docker's. This topic will focus on the python application loggers: gunicorn's and flask's.

The gunicorn utility accepts one more option, --log-level, which is responsible for logging and can be set to these values:

  • 'debug'
  • 'info'
  • 'warning'
  • 'error'
  • 'critical'

and can be used like this: gunicorn app:app --log-level=debug -b :8000, or command: ["gunicorn", "app:app", "--log-level=debug", "-b", ":8000"] with compose.

That logger is used purely by gunicorn server. It will inform you about the state of the workers and critical errors.

To see log entries from the flask application itself, you need to set the level on the created Flask application's internal logger, like this: app.logger.setLevel(logging.DEBUG) (the string form 'DEBUG' also works, but it must be uppercase). You can wire these two logging systems together so they share a format and output configuration with the logging module:

from flask import Flask
import logging

import psycopg2

app = Flask(__name__)


gunicorn_logger = logging.getLogger('gunicorn.error')
app.logger.handlers = gunicorn_logger.handlers
app.logger.setLevel(gunicorn_logger.level)


@app.route('/', methods=['POST'])
def create_topic():
    # database_module and communication_module are application-specific helpers
    try:
        detail = database_module.create_topic()
    except psycopg2.Error as err:
        app.logger.error(f'There was an error in the create topic method: {err}')
        app.logger.info('Sending error notification')
        detail = communication_module.send_error_notification()
    return detail

or use an independent logger setting it up like this:

logging.basicConfig(level=logging.ERROR)

There are ways to customize logging: set up the format, add an SMTP handler for email notifications, and other additional features. You can read about them in the official documentation.
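As a small taste of that customization, here is a sketch of attaching a handler with an explicit format string. The StringIO target is only there so the example is self-contained; in the application, the handler would point at stderr or a file:

```python
import io
import logging

# log into a string buffer so the example can show its own output
stream = io.StringIO()
handler = logging.StreamHandler(stream)
# format string: level name, logger name, then the message itself
handler.setFormatter(logging.Formatter("%(levelname)s:%(name)s:%(message)s"))

logger = logging.getLogger("flask_app.demo")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("topic created")
print(stream.getvalue().strip())  # INFO:flask_app.demo:topic created
```

The same Formatter could be attached to app.logger's handlers to make flask entries match gunicorn's output style.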

Conclusion

Today, we addressed the production needs of a flask application. We learned why you need a dedicated WSGI server and why you might need nginx as a proxy, saw how to set up both WSGI and nginx, and highlighted the basic configuration options.

We suggested a deployment configuration with docker and docker-compose, saw how to treat nginx, the flask application, and the database in containers, and how to let them communicate with each other.

Finally, we took a look at the logging possibilities, both in the WSGI scope and in the flask application itself.
