- Tailored for:
- DevOps,
- SRE,
- DevSecOps Engineers
- Intro,
- Key Concepts,
- Common Use Cases,
- Link to documentation,
- Project: Hands-On
- An EKS cluster is an Amazon Elastic Kubernetes Service (EKS) cluster:
- EKS is a managed Kubernetes service provided by Amazon Web Services (AWS).
- An EKS cluster makes it easy to:
- Deploy,
- Manage,
- And scale containerized applications using Kubernetes.
- AWS handles the management of:
- the Kubernetes control plane, including the API server nodes and backend persistence layer, allowing users to focus on application deployment and management of the data plane (worker nodes).
- AWS manages and scales the Kubernetes control plane across multiple Availability Zones to ensure high availability and durability.
- Users can provision and manage worker nodes using Amazon EC2 instances (self-managed or managed node groups) or use the serverless compute option with AWS Fargate.
- EKS seamlessly integrates with other AWS services for networking (Amazon VPC), monitoring (Amazon CloudWatch), load balancing (ELB, ALB), and identity and access management (IAM).
- EKS provides robust security features, including IAM integration for authentication, network policies, and audit logging to CloudWatch.
- Clusters can be created and managed using various tools such as the AWS Management Console, AWS CLI, eksctl (a simple CLI for EKS), AWS CloudFormation, or Infrastructure as Code (IaC) tools like Terraform.
- EKS is used in various scenarios where a scalable, reliable, and secure Kubernetes environment is needed:
- Running highly available microservices by leveraging load balancing and auto-scaling.
- Using EKS Anywhere to run EKS clusters in on-premises data centers for a consistent hybrid cloud experience.
- Running ML workloads that require specific compute resources and scaling capabilities.
- As a robust platform for deploying and running continuous integration and continuous delivery pipelines.
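Besides the console and raw CLI flags, eksctl also accepts a declarative config file. The following is a minimal sketch matching the cluster name, region, node type, and node count used in the hands-on project below (the file name is my own choice):

```shell
# Write a declarative eksctl ClusterConfig equivalent to the
# `eksctl create cluster` flags used later in this walkthrough.
cat > twtech-cluster.yaml <<'EOF'
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: twtech-eks
  region: us-east-2
managedNodeGroups:
  - name: twtnode
    instanceType: t3.medium
    desiredCapacity: 2
EOF
# The cluster could then be created with:
#   eksctl create cluster -f twtech-cluster.yaml
```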
Link to documentation
https://docs.aws.amazon.com/eks/
- How twtech uses Terraform (IaC) to provision EKS infrastructure in AWS,
- while bootstrapping all the dependencies.
- From Visual Studio Code, create a .tf file with the defined resources & values.
- Configure all the files in the module with appropriate values for the resources that will be provisioned.
- Connect (SSH) to the instance and verify all the necessary packages that were bootstrapped.
- Provision the twtech-EKS-Cluster in the cloud from the command line (CLI).
- Values should be configured to match the expected region, name, node type, and number of nodes.
- This should take about 10 to 15 minutes to fully provision the EKS resources defined:
eksctl create cluster --name twtech-eks --region us-east-2 --nodegroup-name twtnode --node-type t3.medium --managed --nodes 2
Step-4:
- Verify that the cluster is successfully provisioned and running seamlessly.
- The following command should confirm that the EKS cluster is up and running:
eksctl get cluster --name twtech-eks --region us-east-2
Step-5:
- Update the kubeconfig file by entering the command below:
aws eks update-kubeconfig --name twtech-eks --region us-east-2
Step-6:
- List all the nodes created, to verify that they are up and running.
kubectl get node
- Deploy a test application with PVC, PV, and StorageClass (MongoDB).
- link found on twtech github-pub-repository:
- Open an editor (vi or vim) and create a manifest file:
sudo vi app-pvc-pv-sc-svc.yaml
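The actual manifest comes from the twtech repository; as a rough sketch only, it might bundle objects like the ones below. The names, the 5Gi size, the 8080 container port, and the EBS CSI provisioner are all assumptions; 31400 is the NodePort browsed later in this walkthrough. With dynamic provisioning, the PV is created automatically from the PVC, so no explicit PV object appears.

```shell
# Hypothetical sketch of app-pvc-pv-sc-svc.yaml -- replace with the real
# manifest from the twtech repository.
cat > app-pvc-pv-sc-svc.yaml <<'EOF'
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: twtech-sc
provisioner: ebs.csi.aws.com   # assumes the EBS CSI driver add-on
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  storageClassName: twtech-sc
  resources:
    requests:
      storage: 5Gi   # assumed size
---
apiVersion: v1
kind: Service
metadata:
  name: twtech-spring-boot-mongo
spec:
  type: NodePort
  selector:
    app: twtech-spring-boot-mongo
  ports:
    - port: 8080        # assumed app port
      targetPort: 8080
      nodePort: 31400   # the NodePort used to browse the app below
EOF
```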
kubectl apply -f app-pvc-pv-sc-svc.yaml
kubectl get all
Step-10:
- List all resources created in all EKS namespaces:
kubectl get all -A
Step-11:
- List the pvc created:
kubectl get pvc
Step-12:
- List the pv created:
kubectl get pv
- List the sc (StorageClass) and svc (Service) resources created:
kubectl get sc
kubectl get svc
- Get the public IP of the worker node (from the console GUI, or the EXTERNAL-IP column of kubectl get nodes -o wide):
10.191.xxx.394:31400
- Browse the application: the firewall restricts access to it, so the NodePort must be opened to allow traffic from end users.
Step-16:
- Go to the Security Group (firewall) and, for security reasons, open just the required ports on the worker nodes:
From:
To:
- Save changes:
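Opening the port can also be scripted with the AWS CLI instead of the console. The sketch below only writes the call to a file for review; the security-group ID is a placeholder to replace with the worker nodes' actual group:

```shell
# Write (for review, not executed here) the AWS CLI call that opens the
# app's NodePort on the worker nodes' security group.
cat > open-nodeport.sh <<'EOF'
#!/bin/sh
SG_ID="sg-0123456789abcdef0"   # placeholder: your worker-node security group
# Open only the required NodePort; tighten the CIDR where possible.
aws ec2 authorize-security-group-ingress \
  --group-id "$SG_ID" --protocol tcp --port 31400 --cidr 0.0.0.0/0
EOF
chmod +x open-nodeport.sh
```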
Step-17:
- Go back and refresh the application page:
twtech-spring-boot-mongo
Step-18:
- Try to populate it with data and see whether the data is saved in the twtech-db:
Step-19:
- Access the twtech-webapp that was also provisioned:
NB:
- The path to the application is /twtech
10.191.158.190:31200/twtech
Step-20:
- Verify that all the resources referenced are provisioned with the EKS cluster:
Go to AWS CloudFormation:
Two CloudFormation stacks:
- twtech-eks-nodegroup(s): EKS managed nodes (SSH access: false), created by eksctl
- twtech-eks-cluster: EKS cluster (dedicated VPC: true, dedicated IAM: true), created and managed by eksctl
Step-21:
- How twtech deletes the EKS cluster and all its resources when no longer needed, from the command line:
eksctl delete cluster --name twtech-eks --region us-east-2
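One caution worth scripting: deleting the app's resources before the cluster lets eksctl remove the dedicated VPC cleanly, since leftover load balancers or EBS volumes can block CloudFormation stack deletion. A sketch, written to a file for review rather than run directly:

```shell
# Write (for review) a teardown script: remove the app first, then the cluster.
cat > teardown.sh <<'EOF'
#!/bin/sh
# Deleting the app releases its EBS volumes and any load balancers,
# so the eksctl-managed CloudFormation stacks can delete cleanly.
kubectl delete -f app-pvc-pv-sc-svc.yaml
eksctl delete cluster --name twtech-eks --region us-east-2
EOF
chmod +x teardown.sh
```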