Building a robust application (Introduction II)

Alex Izuka
8 min read · Nov 11, 2023

I began a series on building a robust application, starting with the infrastructure aspect. In this article I continue the introduction, building on the discussion in the first part of the series.

Blue-green deployment

Blue-green deployment is a deployment strategy that minimizes downtime and reduces risk by maintaining two identical environments (blue and green) and switching traffic between them during a release. Below are the key considerations when implementing blue-green deployment strategies in both AWS and Azure:

Blue-Green Deployment in AWS

  1. Elastic Load Balancer (ELB): Use AWS Elastic Load Balancer (ELB) to distribute traffic between the blue and green environments. Configure the ELB to route traffic to the active (live) environment.
  2. Auto Scaling Groups: Utilize Auto Scaling Groups to manage and maintain instances in both the blue and green environments. Adjust the desired capacity of the Auto Scaling Groups to control the number of instances in each environment.
  3. Amazon Route 53: Use Amazon Route 53 to manage DNS and implement a weighted routing policy. Gradually shift traffic from the blue environment to the green environment by adjusting the weights.
  4. Environment Parameterization: Parameterize your application and infrastructure configurations to ensure flexibility during blue-green deployments. This includes configuration files, environment variables, and database connection strings.
  5. Database Considerations: If your application uses a database, consider database schema changes and data migration during the blue-green deployment. Tools like AWS Database Migration Service (DMS) can help with this process.
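The weighted-routing shift from steps 1 and 3 can be sketched in code. This is a minimal illustration, not a full deployment script: the domain, hosted zone, and ELB targets are hypothetical placeholders, and in a real pipeline each change batch would be sent with boto3's `route53.change_resource_record_sets`.

```python
# Sketch of a gradual blue-to-green shift via Route 53 weighted records.
# Record names and targets are hypothetical placeholders.

def shift_schedule(steps):
    """Yield (blue_weight, green_weight) pairs moving traffic from
    all-blue to all-green in `steps` equal increments."""
    for i in range(steps + 1):
        green = round(100 * i / steps)
        yield (100 - green, green)

def weighted_record_change(name, identifier, target, weight):
    """Build one UPSERT change for a Route 53 weighted CNAME record."""
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": name,
            "Type": "CNAME",
            "SetIdentifier": identifier,  # distinguishes blue vs green
            "Weight": weight,             # relative share of traffic
            "TTL": 60,                    # short TTL so shifts take effect quickly
            "ResourceRecords": [{"Value": target}],
        },
    }

# A 4-step shift: 100/0 -> 75/25 -> 50/50 -> 25/75 -> 0/100.
for blue_w, green_w in shift_schedule(4):
    batch = {"Changes": [
        weighted_record_change("app.example.com", "blue",
                               "blue-elb.example.com", blue_w),
        weighted_record_change("app.example.com", "green",
                               "green-elb.example.com", green_w),
    ]}
```

Pausing between steps (with health checks against the green environment) is what makes the shift safe: any step can be aborted by re-sending the previous weights.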

Blue-green deployment strategies in Amazon EKS

  1. Namespace Separation: Use Kubernetes namespaces to separate your blue and green environments. This ensures isolation and simplifies the management of resources for each environment.
  2. Deploying Multiple Versions: Deploy the blue and green versions of your application as separate Kubernetes deployments within their respective namespaces.
  3. Service Configuration: Create Kubernetes Services for your applications, and configure them to expose the services to the external world. Use a load balancer or Ingress controller to manage external access.
  4. Route Traffic with Ingress: Use Kubernetes Ingress to control traffic routing. Gradually shift traffic from the blue version to the green version by updating the Ingress rules.
  5. Namespace Swap: Once you are satisfied with the deployment, you can perform a namespace swap to switch the blue and green environments. This involves updating the Ingress rules to point to the other namespace.
  6. Rollback: In case of issues, rollback is simple: update the Ingress rules to point back to the previous namespace, effectively restoring the previous version.
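The Ingress-based cutover and rollback in steps 4–6 amount to repointing the Ingress backend from the blue Service to the green one. Here is a minimal sketch that models the manifest as a Python dict; the host and service names are hypothetical, and in practice the updated manifest would be applied with kubectl or the Kubernetes API.

```python
# Sketch of an Ingress-based blue/green switch: the backend Service is
# repointed from blue to green (and back again for rollback).

def make_ingress(host, service_name):
    """Build a minimal networking.k8s.io/v1 Ingress routing `host`
    to `service_name` on port 80."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "Ingress",
        "metadata": {"name": "app-ingress"},
        "spec": {"rules": [{
            "host": host,
            "http": {"paths": [{
                "path": "/",
                "pathType": "Prefix",
                "backend": {"service": {"name": service_name,
                                        "port": {"number": 80}}},
            }]},
        }]},
    }

def switch_backend(ingress, new_service):
    """Repoint every path in the Ingress to `new_service`.
    Used for both cutover (blue -> green) and rollback (green -> blue)."""
    for rule in ingress["spec"]["rules"]:
        for path in rule["http"]["paths"]:
            path["backend"]["service"]["name"] = new_service
    return ingress

ingress = make_ingress("app.example.com", "app-blue")  # live on blue
switch_backend(ingress, "app-green")                   # cut over to green
```

Because the switch is a single field change, rollback is the same operation with the old service name, which is why this pattern recovers quickly from a bad release.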

Blue-Green Deployment in Azure

  1. Azure Traffic Manager: Use Azure Traffic Manager to distribute traffic between the blue and green environments. Implement a weighted routing method to gradually shift traffic from one environment to another.
  2. Azure App Services: If you are using Azure App Services, deploy your application in separate app service plans or web apps for the blue and green environments.
  3. Azure Traffic Manager Probing: Configure health probes in Azure Traffic Manager to monitor the health of instances in both environments. This ensures that traffic is directed to healthy instances.
  4. Deployment Slots: If your application is hosted in Azure App Services, leverage deployment slots to stage and swap your application between environments seamlessly.
  5. Azure SQL Database Considerations: If your application uses Azure SQL Database, plan for schema changes and data migration. Azure provides tools like Azure Database Migration Service to assist with this process.
  6. Environment Parameterization: Parameterize your application configurations and settings to allow flexibility during blue-green deployments. This may include environment variables, configuration files, and application settings.
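The deployment-slot swap from step 4 can be sketched as an atomic exchange: "staging" holds the new (green) build, "production" the live (blue) one, and a swap trades them. The slot names follow App Service conventions, but the app versions here are hypothetical; the real swap would be triggered with, for example, `az webapp deployment slot swap --slot staging`.

```python
# Sketch of an App Service deployment-slot swap: production and staging
# exchange deployments atomically; swapping again is the rollback.

def swap_slots(slots, a="production", b="staging"):
    """Return a new slot mapping with the deployments of `a` and `b`
    exchanged, mirroring what an App Service slot swap does."""
    swapped = dict(slots)
    swapped[a], swapped[b] = slots[b], slots[a]
    return swapped

slots = {"production": "v1.0 (blue)", "staging": "v1.1 (green)"}
after = swap_slots(slots)        # green goes live, blue parked in staging
rolled_back = swap_slots(after)  # rollback is simply swapping again
```

The key property is that the old version remains warm in the staging slot after the swap, so rollback does not require a redeploy.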

Blue-green deployment strategies in Azure AKS

  1. Multiple AKS Clusters: Create separate AKS clusters for your blue and green environments. This ensures isolation and simplifies the management of resources for each environment.
  2. Deploying Multiple Versions: Deploy the blue and green versions of your application as separate Kubernetes deployments within their respective AKS clusters.
  3. Service Configuration: Create Kubernetes Services for your applications, and expose them using Azure Load Balancer or Ingress controllers.
  4. Azure Traffic Manager: Use Azure Traffic Manager to manage DNS-based traffic routing. Configure Traffic Manager to route traffic between the blue and green AKS clusters.
  5. Gradual Traffic Shift: Gradually shift traffic from the blue version to the green version using Traffic Manager. This can be achieved by adjusting the traffic distribution weights.
  6. Rollback: In case of issues, rollback is simplified by adjusting the Traffic Manager configuration to route all traffic back to the previous AKS cluster.
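Steps 4–6 can be sketched as weight rewrites on two Traffic Manager endpoints, one per AKS cluster. The endpoint names are hypothetical; in practice the weights would be updated through the Azure portal, CLI, or SDK.

```python
# Sketch of Traffic Manager weighted routing across two AKS clusters:
# shifting traffic rewrites the weights, and rollback sends 100% back
# to the blue cluster.

def set_traffic_split(endpoints, green_percent):
    """Return endpoints with weights for a blue/green split. Traffic
    Manager weights are relative, so percentages map directly."""
    split = {"blue-aks": 100 - green_percent, "green-aks": green_percent}
    return [dict(e, weight=split[e["name"]]) for e in endpoints]

def rollback(endpoints):
    """Route all traffic back to the blue cluster."""
    return set_traffic_split(endpoints, 0)

endpoints = [{"name": "blue-aks", "weight": 100},
             {"name": "green-aks", "weight": 0}]
canary = set_traffic_split(endpoints, 10)  # 90/10 canary on green
reverted = rollback(canary)                # all traffic back on blue
```

One caveat worth noting: because Traffic Manager works at the DNS level, a shift or rollback only takes full effect once cached DNS answers expire, so a short TTL on the Traffic Manager profile matters here too.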

Failover strategies

Implementing failover strategies is crucial for maintaining high availability and ensuring business continuity in cloud environments. Both AWS and Azure offer services and features to help you design and implement effective failover strategies. Below are key considerations for implementing failover strategies in both cloud platforms.

AWS

  1. Amazon Route 53: Use Amazon Route 53 for DNS failover. Route 53 can automatically route traffic to healthy endpoints based on health checks, helping with failover between regions or instances.
  2. Elastic Load Balancer (ELB): Leverage ELB for distributing traffic across multiple instances or across different availability zones. ELB can automatically detect unhealthy instances and reroute traffic to healthy ones.
  3. Amazon RDS Multi-AZ: For database failover, use Amazon RDS Multi-AZ deployments. This feature automatically replicates your database to a standby instance in a different availability zone, enabling automatic failover in case of a primary database failure.
  4. Amazon S3 Cross-Region Replication: If your application relies on Amazon S3, implement cross-region replication to replicate data to a different region for disaster recovery and failover.
  5. Auto Scaling Groups: Use Auto Scaling Groups to automatically adjust the number of instances in response to demand. This helps in maintaining application availability and scaling resources based on traffic patterns.
  6. AWS Global Accelerator: AWS Global Accelerator allows you to allocate static Anycast IP addresses to your application endpoints, providing a single entry point for your application across multiple AWS regions.
  7. Amazon Aurora Global Databases: If using Amazon Aurora for databases, consider using Global Databases to replicate databases across regions for read scalability and failover.
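The DNS failover from step 1 can be sketched as a selection rule over an active-passive record pair: serve the primary while its health check passes, otherwise fail over to the secondary. The endpoints and health data below are hypothetical; Route 53 evaluates real health checks for you.

```python
# Sketch of Route 53 active-passive DNS failover: primary while healthy,
# secondary otherwise, and fail-open to the primary if nothing is healthy.

def resolve_failover(records, health):
    """Pick the record Route 53-style: primary if healthy, else a healthy
    secondary, else the primary as a last resort (fail-open)."""
    primary = next(r for r in records if r["failover"] == "PRIMARY")
    secondary = next(r for r in records if r["failover"] == "SECONDARY")
    if health.get(primary["value"], False):
        return primary
    if health.get(secondary["value"], False):
        return secondary
    return primary  # all endpoints unhealthy: fail open rather than NXDOMAIN

records = [
    {"failover": "PRIMARY", "value": "app.us-east-1.example.com"},
    {"failover": "SECONDARY", "value": "app.eu-west-1.example.com"},
]
```

The fail-open branch mirrors Route 53's behavior of still answering when every endpoint looks unhealthy, on the assumption that a possibly-degraded answer beats no answer at all.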

Failover Strategies in Amazon EKS

  1. Multi-AZ Clusters: Deploy your EKS clusters across multiple Availability Zones (AZs) to ensure high availability. EKS automatically distributes your Kubernetes control plane across multiple AZs.
  2. Node Group Strategies: Utilize multiple node groups in different AZs for your worker nodes. This provides redundancy and ensures that your applications continue to run even if one AZ experiences issues.
  3. Pod Distribution: Deploy your application pods across multiple nodes and AZs to distribute the workload. Kubernetes itself manages the distribution of pods based on resource availability and constraints.
  4. Load Balancers: Use Amazon Elastic Load Balancer (ELB) or Application Load Balancer (ALB) to distribute traffic across nodes and AZs. ELB and ALB automatically handle failover and route traffic to healthy instances.
  5. EKS Managed Node Groups: Consider using EKS managed node groups, which simplify the deployment and management of worker nodes. EKS manages the health and availability of these node groups.
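The pod distribution in step 3 is what a Kubernetes topology spread constraint enforces. As a rough sketch, spreading replicas round-robin across AZs keeps the per-zone counts within one of each other, so losing one zone removes only its share of pods. The zone names follow AWS conventions; the replica count is arbitrary.

```python
# Sketch of spreading replicas across AZs (what a maxSkew=1
# topologySpreadConstraint achieves), and the effect of losing a zone.
from itertools import cycle

def spread_pods(replicas, zones):
    """Assign replicas round-robin across zones; per-zone counts never
    differ by more than one."""
    placement = {z: 0 for z in zones}
    for _, zone in zip(range(replicas), cycle(zones)):
        placement[zone] += 1
    return placement

def survivors(placement, failed_zone):
    """Pods still running after one zone fails."""
    return sum(n for z, n in placement.items() if z != failed_zone)

zones = ["us-east-1a", "us-east-1b", "us-east-1c"]
placement = spread_pods(7, zones)
```

With 7 replicas over 3 zones, a single-zone failure leaves at least 4 pods running, which is why spreading (rather than packing) replicas is the default posture for failover.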

Azure

  1. Azure Traffic Manager: Utilize Azure Traffic Manager for DNS-based traffic routing. It can distribute traffic across multiple regions or endpoints and automatically redirect users to healthy instances.
  2. Azure Load Balancer: Use Azure Load Balancer to distribute incoming network traffic across multiple VM instances within a single region. It provides load balancing and supports availability sets for high availability.
  3. Azure Application Gateway: For web applications, consider using Azure Application Gateway, which provides layer 7 (application layer) load balancing and supports backend pool configurations for failover.
  4. Azure Traffic Manager with Azure App Service Environments: For highly scalable and available web applications hosted in Azure App Service Environments, combine Azure Traffic Manager for DNS-based routing with multiple instances of App Service Environments for failover.
  5. Azure Site Recovery: Implement Azure Site Recovery for disaster recovery and failover of on-premises or Azure VMs. It provides replication and automated failover in case of a region-wide or site-wide outage.
  6. Azure Database Geo-Replication: For database failover, use Azure Database Geo-Replication. This allows you to replicate databases to different regions, providing a secondary read-only copy and enabling failover in case of a primary database failure.
  7. Azure Virtual Machine Scale Sets: Use Azure Virtual Machine Scale Sets to automatically scale the number of VM instances based on demand. It helps maintain application availability and distribute traffic.
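Traffic Manager's priority routing method, combined with the health probes from step 1, reduces to a simple rule: send traffic to the lowest-priority-number endpoint that is currently healthy. A minimal sketch, with hypothetical endpoint names and health states:

```python
# Sketch of Traffic Manager priority routing: lowest priority value wins
# among healthy endpoints; an unhealthy primary is skipped automatically.

def pick_endpoint(endpoints):
    """Return the healthy endpoint with the lowest priority value,
    or None if every endpoint is degraded."""
    healthy = [e for e in endpoints if e["healthy"]]
    return min(healthy, key=lambda e: e["priority"]) if healthy else None

endpoints = [
    {"name": "westeurope-app", "priority": 1, "healthy": False},
    {"name": "northeurope-app", "priority": 2, "healthy": True},
    {"name": "eastus-app", "priority": 3, "healthy": True},
]
# With the priority-1 endpoint down, traffic fails over to priority 2.
```

This is the active-passive counterpart to the weighted method used for blue-green shifts: priorities express a failover order, while weights express a traffic split.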

Failover Strategies in Azure AKS

  1. Availability Zones: Deploy your AKS clusters across multiple Availability Zones to achieve high availability. AKS provides the option to distribute your nodes across multiple zones.
  2. Node Pool Strategies: Create multiple node pools in different Availability Zones for your AKS clusters. This provides redundancy and ensures that your applications remain available in the event of a failure.
  3. Pod Distribution: Deploy your application pods across multiple nodes and zones to distribute the workload. Kubernetes automatically schedules pods based on resource constraints and availability.
  4. Auto Scaling: Use the AKS cluster autoscaler to automatically adjust the number of nodes based on demand. This helps maintain the desired level of resources and availability.
  5. Azure Load Balancer: Leverage Azure Load Balancer to distribute traffic across nodes and zones. Azure Load Balancer handles failover and redirects traffic to healthy instances.
  6. Node Pools and Spot Instances: Consider using node pools with Azure Spot VMs for cost savings. In the event of Spot VM termination, AKS automatically redistributes workloads to other nodes.
  7. Azure Traffic Manager: Use Azure Traffic Manager for DNS-based traffic routing across multiple AKS clusters in different regions. This provides global load balancing and failover capabilities.
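The autoscaler decision in step 4 can be sketched as simple capacity arithmetic. This is a deliberate simplification (the real cluster autoscaler simulates scheduling, bin-packing pods onto candidate nodes), and the CPU figures and pool limits below are illustrative only.

```python
# Simplified sketch of a cluster-autoscaler scale-up decision: how many
# nodes are needed to fit pending pods, capped by the node pool maximum.
import math

def nodes_to_add(pending_pod_cpus, node_cpu, current_nodes, max_nodes):
    """Nodes needed to fit the pending pods' CPU requests, capped so
    the pool never grows past max_nodes."""
    if not pending_pod_cpus:
        return 0
    needed = math.ceil(sum(pending_pod_cpus) / node_cpu)
    return min(needed, max_nodes - current_nodes)

# Six pending pods requesting 0.5 CPU each on 2-CPU nodes.
extra = nodes_to_add([0.5] * 6, node_cpu=2, current_nodes=3, max_nodes=10)
```

The cap against `max_nodes` is the important failover detail: the autoscaler keeps you available under load spikes without letting a runaway workload scale the pool (and the bill) without bound.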

Backup and Data Storage

Implementing backup and data storage strategies for containerized applications in cloud environments like AWS (Amazon Web Services), Azure, and EKS (Elastic Kubernetes Service) or AKS (Azure Kubernetes Service) involves considering various aspects such as data persistence, backup mechanisms, and disaster recovery. Below are guidelines for each platform.

AWS and EKS

  1. Amazon EBS Volumes: For data persistence in AWS, use Amazon EBS (Elastic Block Store) volumes to store critical data. Attach EBS volumes to your EC2 instances or pods in EKS.
  2. Amazon RDS or Aurora: For managed database services, consider using Amazon RDS (Relational Database Service) or Amazon Aurora. These services provide automated backups, snapshots, and multi-AZ deployment for high availability.
  3. Amazon S3 for Object Storage: For object storage needs, use Amazon S3 (Simple Storage Service). Store static assets, backups, and other data in S3 buckets. Enable versioning for added data protection.
  4. EKS Persistent Volumes (PVs) and Persistent Volume Claims (PVCs): In EKS, leverage Kubernetes Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) to manage storage for your applications. Map these volumes to EBS volumes for data persistence.
  5. AWS Backup: Use AWS Backup to centrally manage and automate backup tasks. AWS Backup supports various AWS services, including EBS volumes, RDS databases, and more.
  6. Lifecycle Policies for S3: Implement lifecycle policies for Amazon S3 to automatically transition older versions of objects to cheaper storage classes or delete them after a specified period.
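The S3 lifecycle policy from step 6 can be sketched as a table of age thresholds plus a resolver that shows where an object of a given age ends up. The transition days below (30, 90, 365) are illustrative choices, not S3 defaults.

```python
# Sketch of an S3 lifecycle policy: objects move to cheaper storage
# classes as they age and are deleted after a year. Days are illustrative.

LIFECYCLE_RULES = [
    # (minimum age in days, resulting storage class; None = expired/deleted)
    (365, None),
    (90, "GLACIER"),
    (30, "STANDARD_IA"),
    (0, "STANDARD"),
]

def storage_class(age_days, rules=LIFECYCLE_RULES):
    """Resolve an object's storage class by the first rule its age meets.
    Rules must be sorted by descending age threshold."""
    for min_age, cls in rules:
        if age_days >= min_age:
            return cls
    return "STANDARD"
```

The same thresholds map directly onto the `Transitions` and `Expiration` fields of a real S3 lifecycle configuration; the point of the sketch is that the policy is just an ordered set of age cutoffs.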

Azure and AKS

  1. Azure Disks: Use Azure Managed Disks for persistent storage in Azure. Attach managed disks to your Azure VMs or pods in AKS to ensure data persistence.
  2. Azure Blob Storage: For object storage, utilize Azure Blob Storage. Store backups, static files, and other data in Blob Storage containers. Enable versioning for additional data protection.
  3. Azure SQL Database or Cosmos DB: For managed databases, consider using Azure SQL Database or Cosmos DB. Both services provide automated backups, geo-replication, and high-availability features.
  4. Azure Files or Azure NetApp Files: Leverage Azure Files for shared file storage or Azure NetApp Files for more advanced file storage needs. Map these storage solutions to your AKS pods.
  5. Azure Backup: Use Azure Backup to automate and manage backups for various Azure services, including VMs, databases, and file shares. Set up backup policies and retention periods.
  6. Lifecycle Management for Blob Storage: Implement Azure Blob Storage lifecycle management to automatically move older data to cool or archive storage tiers or delete data based on defined policies.
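The retention side of step 5 is often expressed as a grandfather-father-son (GFS) policy: keep recent daily backups, a few weekly ones, and a longer tail of monthlies. A minimal sketch; the counts (7 daily, 4 weekly, 12 monthly) are illustrative, not Azure Backup defaults.

```python
# Sketch of GFS backup retention: keep the last 7 daily backups, the last
# 4 Sunday backups, and the first-of-month backup for the last 12 months.
from datetime import date, timedelta

def backups_to_keep(today, dailies=7, weeklies=4, monthlies=12):
    """Return the set of backup dates a GFS policy would retain."""
    keep = set()
    # Last `dailies` days, including today.
    keep.update(today - timedelta(days=i) for i in range(dailies))
    # Last `weeklies` Sundays.
    sunday = today - timedelta(days=(today.weekday() + 1) % 7)
    keep.update(sunday - timedelta(weeks=i) for i in range(weeklies))
    # First of the month for the last `monthlies` months.
    year, month = today.year, today.month
    for _ in range(monthlies):
        keep.add(date(year, month, 1))
        year, month = (year - 1, 12) if month == 1 else (year, month - 1)
    return keep

kept = backups_to_keep(date(2023, 11, 11))
```

Everything outside the returned set is eligible for deletion or for demotion to a cool/archive tier, which is how the retention policy and the lifecycle management from step 6 fit together.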

Conclusion

Remember, which of these measures you employ depends on how your system is designed. They are generic, so you may not need all of them, but applying most will boost the robustness of your infrastructure.
