In this article, we’re going to deploy an Airflow application in a Conda environment, secure the application using Nginx, and request an SSL certificate from Let’s Encrypt.

Airflow is a popular tool that we can use to define, schedule, and monitor our complex workflows. We can create Directed Acyclic Graphs (DAGs) to automate tasks across our work platforms, and, being open source, Airflow has a community that provides support and improves it continuously.

This is a sponsored article by Vultr. Vultr is the world’s largest privately-held cloud computing platform. A favorite with developers, Vultr has served over 1.5 million customers across 185 countries with flexible, scalable, global Cloud Compute, Cloud GPU, Bare Metal, and Cloud Storage solutions. Learn more about Vultr.

Deploying a Server on Vultr

Let’s start by deploying a Vultr server with the Anaconda marketplace application.

  1. Sign up and log in to the Vultr Customer Portal.

  2. Navigate to the Products page.

  3. Select Compute from the side menu.

  4. Click Deploy Server.

  5. Select Cloud Compute as the server type.

  6. Choose a Location.

  7. Select Anaconda among the marketplace applications.

  8. Choose a Plan.

  9. Select any additional features as required in the “Additional Features” section.

  10. Click the Deploy Now button.

    Vultr server deploy button

Creating a Vultr Managed Database

After deploying a Vultr server, we’ll next deploy a Vultr-managed PostgreSQL database. We’ll also create two new databases in our database instance that will be used to connect with our Airflow application later in the article.

  1. Open the Vultr Customer Portal.

  2. Click the Products menu group and navigate to Databases to create a PostgreSQL managed database.

    Vultr Database products menu button

  3. Click Add Managed Databases.

  4. Select PostgreSQL with the latest version as the database engine.

    Vultr managed PostgreSQL selection

  5. Select the Server Configuration and Server Location.

  6. Write a Label for the service.

    Label button managed database

  7. Click Deploy Now.

    Vultr managed database deploy button

  8. After the database is deployed, select Users & Databases.

    Vultr managed database users and database section

  9. Click Add New Database.

  10. Type in a name, click Add Database, and name it airflow-pgsql.

  11. Repeat steps 9 and 10 to add another database to the same managed database and name it airflow-celery.

Getting Started with Conda and Airflow

Now that we’ve created a Vultr-managed PostgreSQL instance, we’ll use the Vultr server to create a Conda environment and install the required dependencies.

  1. Check the Conda version:

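    $ conda --version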
  2. Create a Conda environment:

    $ conda create -n airflow python=3.8
  3. Activate the environment:

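    $ conda activate airflow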
  4. Install the Redis server:

    (airflow) $ apt install -y redis-server
  5. Enable the Redis server:

    (airflow) $ sudo systemctl enable redis-server
  6. Check the status:

    (airflow) $ sudo systemctl status redis-server

    Redis server status check

  7. Install the Python package manager:

    (airflow) $ conda install pip
  8. Install the required dependencies:

    (airflow) $ pip install psycopg2-binary virtualenv redis
  9. Install Airflow in the Conda environment:

    (airflow) $ pip install "apache-airflow[celery]==2.8.1" --constraint "https://raw.githubusercontent.com/apache/airflow/constraints-2.8.1/constraints-3.8.txt"
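
    Optionally, we can confirm that the installation succeeded by printing the installed Airflow version:

    (airflow) $ airflow version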

Connecting Airflow with the Vultr Managed Database

After preparing the environment, let’s connect our Airflow application with the two databases we created earlier inside our database instance, and make the necessary changes to the Airflow configuration to make our application production-ready.

  1. Set the environment variable for the database connection:

    (airflow) $ export AIRFLOW__DATABASE__SQL_ALCHEMY_CONN="postgresql://user:password@hostname:port/db_name"

    Make sure to replace user, password, hostname, and port with the actual values shown in the connection details section when selecting the airflow-pgsql database. Replace db_name with airflow-pgsql.

    airflow-pgsql database credential selection

  2. Initialize the metadata database.

    We must initialize a metadata database for Airflow to create the necessary tables and schema that store information such as DAGs and data related to our workflows:

    (airflow) $ airflow db init
  3. Open the Airflow configuration file:

    (airflow) $ sudo nano ~/airflow/airflow.cfg
  4. Scroll down and change the executor:

    executor = CeleryExecutor
  5. Link the Vultr-managed PostgreSQL database by changing the value of sql_alchemy_conn:

    sql_alchemy_conn = "postgresql://user:password@hostname:port/db_name"

    Make sure to replace user, password, hostname, and port with the actual values shown in the connection details section when selecting the airflow-pgsql database. Replace db_name with airflow-pgsql.

  6. Scroll down and change the worker and trigger log ports:

    worker_log_server_port = 8794
    trigger_log_server_port = 8795
  7. Change the broker_url:

    broker_url = redis://localhost:6379/0
  8. Remove the # and change the result_backend:

    result_backend = db+postgresql://user:password@hostname:port/db_name

    Make sure to replace user, password, hostname, and port with the actual values shown in the connection details section when selecting the airflow-celery database. Replace db_name with airflow-celery.

    airflow-celery database credential selection

  9. Save and exit the file.

  10. Create an Airflow user:

    (airflow) $ airflow users create \
        --username admin \
        --firstname Peter \
        --lastname Parker \
        --role Admin \
        --email [email protected]

    Make sure to replace all the variable values with the actual values.

    Enter a password when prompted. It will be used by this user when accessing the dashboard.
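
    Optionally, we can run a quick sanity check to confirm that Airflow can reach the configured metadata database:

    (airflow) $ airflow db check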

Daemonizing the Airflow Application

Now let’s daemonize our Airflow application so that it runs in the background and continues to run independently even after we close the terminal and log out.

These steps will also help us create persistent services for the Airflow webserver, scheduler, and Celery workers.

  1. View the airflow path:

    (airflow) $ which airflow

    Copy the path to the clipboard.

  2. Create an Airflow webserver service file:

    (airflow) $ sudo nano /etc/systemd/system/airflow-webserver.service
  3. Paste the service configuration into the file.

    airflow webserver is responsible for providing a web-based user interface that allows us to interact with and manage our workflows. This configuration creates a background service for our Airflow webserver:

    [Unit]
    Description="Airflow Webserver"
    After=network.target
    
    [Service]
    User=example_user
    Group=example_user
    ExecStart=/home/example_user/.local/bin/airflow webserver
    
    [Install]
    WantedBy=multi-user.target

    Make sure to replace User and Group with your actual non-root sudo user account details, and replace the ExecStart path with the actual Airflow path, including the executable binary we copied earlier to the clipboard.

  4. Save and close the file.

  5. Enable the airflow-webserver service so that the webserver automatically starts up during the system boot process:

    (airflow) $ sudo systemctl enable airflow-webserver
  6. Start the service:

    (airflow) $ sudo systemctl start airflow-webserver
  7. Make sure that the service is up and running:

    (airflow) $ sudo systemctl status airflow-webserver

    Our output should appear similar to the one pictured below.

    airflow-webserver service status check

  8. Create an Airflow Celery service file:

    (airflow) $ sudo nano /etc/systemd/system/airflow-celery.service
  9. Paste the service configuration into the file.

    airflow celery worker starts a Celery worker. Celery is a distributed task queue that allows us to distribute and execute tasks across multiple workers. The workers connect to our Redis server to receive and execute tasks:

    [Unit]
    Description="Airflow Celery"
    After=network.target
    
    [Service]
    User=example_user
    Group=example_user
    ExecStart=/home/example_user/.local/bin/airflow celery worker
    
    [Install]
    WantedBy=multi-user.target

    Make sure to replace User and Group with your actual non-root sudo user account details, and replace the ExecStart path with the actual Airflow path, including the executable binary we copied earlier to the clipboard.

  10. Save and close the file.

  11. Enable the airflow-celery service:

    (airflow) $ sudo systemctl enable airflow-celery
  12. Start the service:

    (airflow) $ sudo systemctl start airflow-celery
  13. Make sure that the service is up and running:

    (airflow) $ sudo systemctl status airflow-celery
  14. Create an Airflow scheduler service file:

    (airflow) $ sudo nano /etc/systemd/system/airflow-scheduler.service
  15. Paste the service configuration into the file.

    airflow scheduler is responsible for scheduling and triggering the DAGs and the tasks defined in them. It also periodically checks the status of DAGs and tasks:

    [Unit]
    Description="Airflow Scheduler"
    After=network.target
    
    [Service]
    User=example_user
    Group=example_user
    ExecStart=/home/example_user/.local/bin/airflow scheduler
    
    [Install]
    WantedBy=multi-user.target

    Make sure to replace User and Group with your actual non-root sudo user account details, and replace the ExecStart path with the actual Airflow path, including the executable binary we copied earlier to the clipboard.

  16. Save and close the file.

  17. Enable the airflow-scheduler service:

    (airflow) $ sudo systemctl enable airflow-scheduler
  18. Start the service:

    (airflow) $ sudo systemctl start airflow-scheduler
  19. Make sure that the service is up and running:

    (airflow) $ sudo systemctl status airflow-scheduler

    Our output should look like that pictured below.

    airflow-scheduler service status check
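
If one of the services fails to enable or start, it usually helps to reload the systemd daemon so that it picks up the newly created unit files, and then to inspect the logs of the affected service, for example:

    (airflow) $ sudo systemctl daemon-reload
    (airflow) $ sudo journalctl -u airflow-webserver -e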

Setting Up Nginx as a Reverse Proxy

We’ve created persistent services for the Airflow application, so now we’ll set up Nginx as a reverse proxy to enhance our application’s security and scalability, following the steps outlined below.

  1. Log in to the Vultr Customer Portal.

  2. Navigate to the Products page.

  3. From the side menu, expand the Network drop down, and select DNS.

  4. Click the Add Domain button in the center.

  5. Follow the setup procedure to add your domain name by selecting the IP address of your server.

  6. Set the following hostnames as your domain’s primary and secondary nameservers with your domain registrar:

    • ns1.vultr.com
    • ns2.vultr.com
  7. Install Nginx:

    (airflow) $ apt install nginx
  8. Make sure to check that the Nginx server is up and running:

    (airflow) $ sudo systemctl status nginx
  9. Create a new Nginx virtual host configuration file in the sites-available directory:

    (airflow) $ sudo nano /etc/nginx/sites-available/airflow.conf
  10. Add the configuration to the file.

    This configuration will direct the traffic for our application from the actual domain to the backend server at http://127.0.0.1:8080 using a proxy pass:

    server {
    
        listen 80;
        listen [::]:80;
        server_name airflow.example.com;
    
        location / {
            proxy_pass http://127.0.0.1:8080;
        }
    
    }

    Make sure to replace airflow.example.com with the actual domain we added in the Vultr dashboard.

  11. Save and close the file.

  12. Link the configuration file to the sites-enabled directory to activate it:

    (airflow) $ sudo ln -s /etc/nginx/sites-available/airflow.conf /etc/nginx/sites-enabled/
  13. Make sure to check the configuration for errors:

    (airflow) $ sudo nginx -t

    Our output should look like that pictured below.

    nginx configuration check

  14. Reload Nginx to apply the changes:

    (airflow) $ sudo systemctl reload nginx
  15. Allow HTTP port 80 through the firewall for all incoming connections:

    (airflow) $ sudo ufw allow 80/tcp
  16. Allow HTTPS port 443 through the firewall for all incoming connections:

    (airflow) $ sudo ufw allow 443/tcp
  17. Reload the firewall rules to save the changes:

    (airflow) $ sudo ufw reload
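
Once the DNS records have propagated, we can optionally confirm that Nginx is proxying requests to the Airflow webserver before requesting a certificate. A plain HTTP response from Airflow (typically a redirect to the login page) indicates that the reverse proxy is working:

    (airflow) $ curl -I http://airflow.example.com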

Applying a Let’s Encrypt SSL Certificate to the Airflow Application

The last step is to apply a Let’s Encrypt SSL certificate to our Airflow application so that it becomes much more secure and our application is protected from unwanted attacks.

  1. Using Snap, install the Certbot Let’s Encrypt client:

    (airflow) $ snap install --classic certbot
  2. Get a new SSL certificate for our domain:

    (airflow) $ certbot --nginx -d airflow.example.com

    Make sure to replace airflow.example.com with our actual domain name.
    When prompted, enter an email address and press Y to accept the Let’s Encrypt terms.

  3. Test that the SSL certificate auto-renews upon expiry.

    Auto-renewal makes sure our SSL certificates stay up to date, reducing the risk of certificate expiry and maintaining the security of our application:

    (airflow) $ certbot renew --dry-run
  4. Use a web browser to open our Airflow application: https://airflow.example.com.

    When prompted, enter the username and password we created earlier.

    airflow dashboard login

    Upon accessing the dashboard, all the DAGs that are provided by default will be visible. A minimal DAG of our own can also be added, as shown in the sketch below.

    airflow dashboard
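
As a quick test of our deployment, we can add a DAG of our own. The sketch below is a minimal example (the file name, DAG id, and task are arbitrary choices): saving it as a Python file in the ~/airflow/dags directory, the default dags_folder configured in airflow.cfg, makes it show up in the dashboard once the scheduler picks it up.

    # ~/airflow/dags/hello_vultr.py -- a minimal example DAG
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    # One daily task that prints a message from a Bash shell
    with DAG(
        dag_id="hello_vultr",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        BashOperator(
            task_id="say_hello",
            bash_command="echo 'Hello from Airflow on Vultr'",
        )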

Conclusion

In this article, we demonstrated how to create Conda environments, deploy a production-ready Airflow application, and improve the performance and security of an application.