Auto-Scaling Directus CMS System

 [Figure: Directus Flow sample layout]


Overview

This document outlines a solution for implementing an auto-scaling Directus CMS system using Kubernetes on Linode. By combining containerisation, Kubernetes orchestration, and a centralised database, the solution provides high availability, scalability, and reliability. Key components include containerised Directus instances, a MariaDB database server, a load balancer, and automated scaling to meet demand.

Architecture

Components


Directus Instances:
  - Multiple containerised Directus CMS instances are deployed as Kubernetes pods.
  - Each instance connects to a central MariaDB database for data read/write operations.

MariaDB Server:
  - A single MariaDB instance is hosted on a separate server to handle all database operations.
  - Configured for regular backups and synchronised with a data warehouse for analytics.

Kubernetes Cluster:
  - Manages the deployment, scaling, and operation of Directus instances.
  - A Kubernetes Service acts as a load balancer to distribute incoming traffic.

Linode Hosting:
  - Provides virtual machines for hosting Kubernetes nodes, Directus instances, and the MariaDB server.
  - Utilises Linode's NodeBalancer service to handle load balancing between Directus instances.
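
The regular backups mentioned for the MariaDB server could be scheduled from inside the cluster as a Kubernetes CronJob. The sketch below is only an illustration: the Secret name backup-credentials, the PVC backup-pvc, and the empty DB_HOST value are placeholders introduced here, not part of the original architecture.

                apiVersion: batch/v1
                kind: CronJob
                metadata:
                  name: mariadb-backup
                spec:
                  schedule: "0 2 * * *"   # nightly at 02:00
                  jobTemplate:
                    spec:
                      template:
                        spec:
                          restartPolicy: OnFailure
                          containers:
                          - name: backup
                            image: mariadb:latest
                            command: ["/bin/sh", "-c"]
                            args:
                            - mysqldump -h "$DB_HOST" -u "$DB_USER" -p"$DB_PASSWORD" directus > /backups/directus-$(date +%F).sql
                            env:
                            - name: DB_HOST
                              value: ""   # the MariaDB server address
                            - name: DB_USER
                              valueFrom:
                                secretKeyRef:
                                  name: backup-credentials   # hypothetical Secret
                                  key: username
                            - name: DB_PASSWORD
                              valueFrom:
                                secretKeyRef:
                                  name: backup-credentials
                                  key: password
                            volumeMounts:
                            - name: backups
                              mountPath: /backups
                          volumes:
                          - name: backups
                            persistentVolumeClaim:
                              claimName: backup-pvc   # hypothetical PVC

Writing dumps to a PersistentVolumeClaim keeps the backup independent of any single pod; the dump files can then be shipped to the data warehouse mentioned above.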

 [Figure: Auto-scaling Directus with Kubernetes]


Kubernetes Deployment Files

Directus Deployment


This manifest manages the Directus CMS instances as Kubernetes pods. It configures 4 replicas of the Directus pod using the directus/directus:latest image, with each pod connecting to the central MariaDB database via environment variables. A companion Service of type LoadBalancer exposes the deployment, mapping external port 80 to container port 8055 and distributing traffic to the pods labelled app: directus.


                apiVersion: apps/v1
                kind: Deployment
                metadata:
                  name: directus-deployment
                spec:
                  replicas: 4
                  selector:
                    matchLabels:
                      app: directus
                  template:
                    metadata:
                      labels:
                        app: directus
                    spec:
                      containers:
                      - name: directus
                        image: directus/directus:latest
                        ports:
                        - containerPort: 8055
                        env:
                        - name: DATABASE_CLIENT
                          value: "mariadb"
                        - name: DATABASE_HOST
                          value: ""
                        - name: DATABASE_PORT
                          value: "3306"
                        - name: DATABASE_NAME
                          value: "directus"
                        - name: DATABASE_USER
                          value: "directus_user"
                        - name: DATABASE_PASSWORD
                          value: "directus_password"
                ---
                apiVersion: v1
                kind: Service
                metadata:
                  name: directus-service
                spec:
                  type: LoadBalancer
                  selector:
                    app: directus
                  ports:
                  - port: 80
                    targetPort: 8055
              
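
Hard-coding DATABASE_PASSWORD in the Deployment is workable for a demo, but the idiomatic Kubernetes home for credentials is a Secret. A minimal sketch, assuming a Secret named directus-db-secret (a name introduced here for illustration):

                apiVersion: v1
                kind: Secret
                metadata:
                  name: directus-db-secret   # hypothetical name
                type: Opaque
                stringData:
                  DATABASE_USER: "directus_user"
                  DATABASE_PASSWORD: "directus_password"

The Deployment's env entries would then reference these keys via valueFrom.secretKeyRef instead of literal value fields, keeping credentials out of the manifest.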

Horizontal Pod Autoscaler


The Horizontal Pod Autoscaler (HPA), named directus-hpa, dynamically adjusts the number of pods in the directus-deployment based on real-time CPU and memory usage. It ensures a minimum of 3 replicas and can scale up to a maximum of 10 replicas. The HPA aims to maintain average CPU and memory utilisation at 70%, optimising resource usage and application performance.


                apiVersion: autoscaling/v2
                kind: HorizontalPodAutoscaler
                metadata:
                  name: directus-hpa
                spec:
                  scaleTargetRef:
                    apiVersion: apps/v1
                    kind: Deployment
                    name: directus-deployment
                  minReplicas: 3
                  maxReplicas: 10
                  metrics:
                  - type: Resource
                    resource:
                      name: cpu
                      target:
                        type: Utilization
                        averageUtilization: 70
                  - type: Resource
                    resource:
                      name: memory
                      target:
                        type: Utilization
                        averageUtilization: 70   
              
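
For the Utilization targets above to take effect, two prerequisites apply: the cluster needs a metrics source such as metrics-server, and the Directus container must declare resource requests, since utilisation is computed as a percentage of the requested amount. A sketch of the container-level addition (the figures are illustrative, not tuned values):

                resources:
                  requests:
                    cpu: "250m"
                    memory: "256Mi"
                  limits:
                    cpu: "500m"
                    memory: "512Mi"

Without requests, the HPA cannot evaluate percentage-based CPU or memory targets and will report the metrics as unavailable.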

Benefits


  - Scalability: Dynamically scales the service in response to increased demand, ensuring optimal performance.
  - High Availability: Ensures continuous availability of Directus instances, with traffic distributed evenly by the load balancer.
  - Data Integrity: The centralised MariaDB server maintains consistent data through regular backups and synchronisation with the data warehouse.
  - Cost Efficiency: Unlike a fixed-size deployment, this implementation scales with demand, allocating resources based on actual usage and minimising unnecessary costs.
  - Optimised Resource Management: Kubernetes automates the provisioning, scheduling, and health of the underlying pods and nodes, allocating resources efficiently to reduce downtime and maintain high performance. This lets DevOps teams focus on improving the service rather than managing the underlying infrastructure.

By implementing this solution, your organisation will benefit from a robust, scalable, and cost-effective Directus CMS system that meets growing business demands efficiently.