
Pre-requisites for building EKS Cluster in AWS

📚 Part 1 of 2: "AWS EKS" series.


EKS Cluster creation Options using Terraform #

© Kalyan Reddy Daida, StackSimplify

An EKS Cluster can be created either from plain Terraform resources or with the EKS module from the Terraform Public Registry.
My Terraform code for this series: https://github.com/rtdevx/iac-terraform-aws-eks
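As a rough sketch of the module approach, the community EKS module from the Public Registry can stand up a cluster in a few lines. The module version, cluster version, and VPC references below are illustrative assumptions, not values taken from my repo:

```hcl
# Sketch: EKS cluster via the community module from the Terraform Public Registry.
# All names, versions and references here are placeholders.
module "eks" {
  source  = "terraform-aws-modules/eks/aws"
  version = "~> 20.0"

  cluster_name    = "ops-dev-eks-ekscluster"
  cluster_version = "1.33"

  # Assumes a companion VPC module exposing these outputs
  vpc_id     = module.vpc.vpc_id
  subnet_ids = module.vpc.private_subnets

  eks_managed_node_groups = {
    default = {
      instance_types = ["t3.medium"]
      min_size       = 1
      max_size       = 3
      desired_size   = 2
    }
  }
}
```

The module hides the IAM roles, security groups and node groups that the resource-by-resource approach (covered below) builds explicitly.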

Pre-requisites for building EKS Cluster #


Pre-requisite Resources #

  • Subnets - Public and Private
  • Route Tables
  • NAT Gateway + Elastic IP
  • Internet Gateway
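The networking prerequisites above could be laid out roughly like this in plain Terraform resources. This is a minimal single-AZ sketch under assumed CIDRs and names; route table associations and tags are omitted for brevity:

```hcl
# Sketch of the prerequisite networking: VPC, public/private subnets,
# Internet Gateway, NAT Gateway + Elastic IP, and route tables.
resource "aws_vpc" "main" {
  cidr_block = "10.0.0.0/16"
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_subnet" "private" {
  vpc_id     = aws_vpc.main.id
  cidr_block = "10.0.101.0/24"
}

resource "aws_internet_gateway" "igw" {
  vpc_id = aws_vpc.main.id
}

resource "aws_eip" "nat" {
  domain = "vpc"
}

# NAT Gateway lives in the public subnet so private nodes can reach out
resource "aws_nat_gateway" "nat" {
  allocation_id = aws_eip.nat.id
  subnet_id     = aws_subnet.public.id
}

# Public subnets route 0.0.0.0/0 to the Internet Gateway
resource "aws_route_table" "public" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block = "0.0.0.0/0"
    gateway_id = aws_internet_gateway.igw.id
  }
}

# Private subnets route 0.0.0.0/0 to the NAT Gateway
resource "aws_route_table" "private" {
  vpc_id = aws_vpc.main.id
  route {
    cidr_block     = "0.0.0.0/0"
    nat_gateway_id = aws_nat_gateway.nat.id
  }
}
```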

EKS Resources #

  • EKS Cluster
    • EKS Cluster IAM Role
    • EKS Cluster Security Group (Attached to ENI)
    • EKS Cluster Network Interfaces (ENI)
    • EKS Cluster itself
  • EKS Node Group
    • EKS Node Group IAM Role
    • EKS Node Group Security Group (Attached to ENI)
    • EKS Node Group Network Interfaces (ENI)
    • EKS Worker Nodes EC2 Instances
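A resource-by-resource sketch of the EKS pieces above might look like the following. Role names, subnet references and scaling sizes are placeholders; the cluster security group and ENIs are created by AWS automatically and do not appear as resources here:

```hcl
# Sketch: EKS cluster + managed node group with their IAM roles.
resource "aws_iam_role" "eks_cluster" {
  name = "eks-cluster-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks_cluster" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_eks_cluster" "this" {
  name     = "ops-dev-eks-ekscluster"
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids = [aws_subnet.public.id, aws_subnet.private.id]
  }

  depends_on = [aws_iam_role_policy_attachment.eks_cluster]
}

resource "aws_iam_role" "eks_nodes" {
  name = "eks-node-group-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

# Worker nodes need these three managed policies
resource "aws_iam_role_policy_attachment" "eks_nodes" {
  for_each = toset([
    "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy",
    "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy",
    "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly",
  ])
  role       = aws_iam_role.eks_nodes.name
  policy_arn = each.value
}

resource "aws_eks_node_group" "default" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "default"
  node_role_arn   = aws_iam_role.eks_nodes.arn
  subnet_ids      = [aws_subnet.private.id]

  scaling_config {
    desired_size = 2
    max_size     = 3
    min_size     = 1
  }

  depends_on = [aws_iam_role_policy_attachment.eks_nodes]
}
```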


Note:

  • EKS Control Plane is managed by Amazon and runs in a separate VPC under a separate (Amazon-owned) account.
  • Communication between the Control Plane and the worker nodes goes through EKS ENIs (Elastic Network Interfaces) and is controlled via Security Groups.

Note:

  • Bastion Host is optional and will not be built in my example. Access to EC2 instances will be granted via SSM Session Manager and restricted to admins (or any other user group, as appropriate).
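For Session Manager access to work, the worker nodes' IAM role needs the SSM managed policy. A one-resource sketch, assuming a hypothetical node role named `aws_iam_role.eks_nodes`:

```hcl
# Sketch: let SSM Session Manager reach the worker nodes without SSH or a bastion.
# The role name is a placeholder for whatever node-group role you define.
resource "aws_iam_role_policy_attachment" "ssm" {
  role       = aws_iam_role.eks_nodes.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore"
}
```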

Note:

  • Both Public and Private Worker Nodes communicate with the Control Plane over the Internet (inside the AWS Cloud).


Elastic Network Interface (ENI) #

When an EKS Cluster is created, AWS creates ENIs that carry the EKS Cluster name in their description.

Those Network Interfaces are created in our VPC, under our AWS account, and they allow AWS Fargate and EC2 instances to communicate with the EKS Control Plane, which lives in a separate VPC in an Amazon-owned account.

Amazon EKS also creates a cluster Security Group attached to these ENIs.
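That cluster security group can be read back from Terraform state rather than looked up by hand. A small sketch, assuming a cluster resource hypothetically named `aws_eks_cluster.this`:

```hcl
# Sketch: expose the cluster security group that EKS attaches to its ENIs.
output "cluster_security_group_id" {
  value = aws_eks_cluster.this.vpc_config[0].cluster_security_group_id
}
```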


kubectl #

Installation #

You must use a kubectl version that is within one minor version of your Amazon EKS cluster control plane. For example, a 1.34 kubectl client works with Kubernetes 1.33, 1.34, and 1.35 clusters.

Windows (curl) #

curl.exe -LO "https://dl.k8s.io/release/v1.35.0/bin/windows/amd64/kubectl.exe"

kubectl version --client

Detailed kubectl installation steps for Windows, Linux and MacOS here:

Optionally set alias for kubectl (Windows):

Set-Alias -Name k -Value kubectl          # Create an alias named k for the kubectl command
Get-Alias -Name k                         # Display the current alias for k
Set-Alias -Name k -Value another-value    # Point the k alias at another-value

Configuration #

# Configure kubeconfig for kubectl
aws eks --region <region-code> update-kubeconfig --name <cluster_name>
aws eks --region eu-central-1 update-kubeconfig --name ops-dev-eks-ekscluster

# List Worker Nodes
kubectl get nodes
kubectl get nodes -o wide

# Verify Services
kubectl get svc

Verify Namespaces and Resources in Namespaces #

# Verify Namespaces
kubectl get namespaces
kubectl get ns
 
Observation: the following namespaces are present by default:
1. kube-node-lease
2. kube-public
3. default
4. kube-system

# Verify Resources in kube-node-lease namespace
kubectl get all -n kube-node-lease

# Verify Resources in kube-public namespace
kubectl get all -n kube-public

# Verify Resources in default namespace
kubectl get all -n default

Observation: 
1. Kubernetes Service: Cluster IP Service for Kubernetes Endpoint

# Verify Resources in kube-system namespace
kubectl get all -n kube-system

Observation: 
1. Kubernetes Deployment: coredns
2. Kubernetes DaemonSet: aws-node, kube-proxy
3. Kubernetes Service: kube-dns
4. Kubernetes Pods: coredns, aws-node, kube-proxy

Verify pods in kube-system namespace #

# Verify System pods in kube-system namespace
kubectl get pods # Nothing in default namespace
kubectl get pods -n kube-system
kubectl get pods -n kube-system -o wide

# Verify Daemon Sets in kube-system namespace
kubectl get ds -n kube-system

Observation: the following two DaemonSets will be running:
1. aws-node
2. kube-proxy

# Describe aws-node Daemon Set
kubectl describe ds aws-node -n kube-system

Observation: 
1. Note the "Image" value: it references the Amazon ECR registry URL

# Describe kube-proxy Daemon Set
kubectl describe ds kube-proxy -n kube-system

Observation:
1. Note the "Image" value: it references the Amazon ECR registry URL

# Describe coredns Deployment
kubectl describe deploy coredns -n kube-system

» Sources « #

Provision an EKS cluster (AWS) using Terraform:

Detailed installation steps for Windows, Linux and MacOS:

» Disclaimer « #

This series draws heavily from Kalyan Reddy Daida’s Terraform on AWS EKS Kubernetes IaC SRE course on Udemy.

His content was a game-changer in helping me understand Terraform.

About the instructor:
🌐 Website · 📺 YouTube · 💼 LinkedIn · 🗃️ GitHub
My Repos for this section:
iac-terraform-aws-eks: AWS EKS Cluster built with Terraform.

ℹ️Shared for educational purposes only, no rights reserved.

Series Overview: AWS EKS

2 parts in this series.


Author: RobK
DevOps | Agile | AWS | Ansible | Terraform | GitHub Actions | Linux | Windows