Creating an EKS cluster with a Fargate profile

Procedure

  1. In the Linux terminal session, configure the shell variables that will be referenced in the subsequent commands.
  2. Choose a name for the EKS cluster, the AWS region in which to provision it, and the Kubernetes control plane version to use. For example, the cluster name link-eks-fargate, the region us-east-2, and Kubernetes version 1.30 (or the latest Kubernetes version supported by EKS):
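    A minimal example, using the values above (the variable names CLUSTER_NAME, AWS_REGION, and K8S_VERSION are illustrative and are reused in the later commands):

      # Values chosen for this example; adjust to your environment
      export CLUSTER_NAME=link-eks-fargate
      export AWS_REGION=us-east-2
      export K8S_VERSION=1.30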
  3. Run the eksctl tool to provision the cluster:
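    One possible invocation, assuming the variables exported in the previous step and that eksctl is installed and configured with your AWS credentials:

      # Provision an EKS cluster with a default Fargate profile
      eksctl create cluster \
        --name "$CLUSTER_NAME" \
        --region "$AWS_REGION" \
        --version "$K8S_VERSION" \
        --fargate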

    The --fargate option creates a default Fargate profile named fp-default for the cluster. It also ensures that the Amazon EFS CSI driver is available, because the driver comes preinstalled on Fargate nodes. Provisioning the cluster takes around 10 minutes.

    In this example, the cluster will be deployed to a new Virtual Private Cloud (VPC) named eksctl-link-eks-fargate-cluster/VPC. The VPC is regional and spans three availability zones (AZs) in the us-east-2 region: us-east-2c, us-east-2b, and us-east-2a. One public and one private subnet will be created in each AZ.

    When the command finishes, it automatically adds a new context to the ~/.kube/config file and sets it as the current context. From this point, kubectl and helm commands will run against the new cluster.
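    For example, you can confirm which context eksctl set as current:

      # Prints the kubeconfig context that kubectl will use
      kubectl config current-context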

  4. Run the following command to check the nodes created in the cluster:
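    The standard way to list the nodes is:

      # List the nodes registered with the cluster
      kubectl get nodes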
    You should see two Fargate nodes, one for each of the CoreDNS pods scheduled through the default Fargate profile, for example:
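    The output will look similar to the following; the node names, ages, and exact version strings will differ in your cluster:

      NAME                                                     STATUS   ROLES    AGE   VERSION
      fargate-ip-192-168-111-143.us-east-2.compute.internal    Ready    <none>   5m    v1.30.x-eks-xxxxxxx
      fargate-ip-192-168-165-121.us-east-2.compute.internal    Ready    <none>   5m    v1.30.x-eks-xxxxxxx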
  5. Run the following command to check whether any EC2 instances back the cluster:
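    One way to check, assuming the AWS CLI is installed and using the variables defined earlier (the kubernetes.io/cluster/<name> tag is commonly applied to EC2 instances that back EKS worker nodes):

      # List EC2 instances tagged as belonging to the cluster; expect empty output
      aws ec2 describe-instances \
        --region "$AWS_REGION" \
        --filters "Name=tag-key,Values=kubernetes.io/cluster/$CLUSTER_NAME" \
        --query 'Reservations[].Instances[].InstanceId' \
        --output text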
    You will notice that no EC2 instances are provisioned.

    This is because Fargate nodes are serverless and fully managed by AWS. Unlike traditional EKS worker nodes that run on EC2 instances, Fargate abstracts the underlying infrastructure.
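    If you want to see the Fargate profile that schedules these pods, one way is to query it with eksctl:

      # Show the Fargate profiles defined for the cluster
      eksctl get fargateprofile \
        --cluster "$CLUSTER_NAME" \
        --region "$AWS_REGION"

    The listing should include the fp-default profile created by the --fargate option, with pod selectors for the default and kube-system namespaces.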

  6. Run the following command to verify that the EFS CSI driver is installed automatically:
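    The CSI drivers registered in the cluster can be listed with:

      # List cluster-wide CSIDriver objects
      kubectl get csidriver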
    You should see the EFS CSI (efs.csi.aws.com) driver listed, for example:
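    The output will be similar to the following; the column set and values depend on the Kubernetes version:

      NAME              ATTACHREQUIRED   PODINFOONMOUNT   STORAGECAPACITY   TOKENREQUESTS   REQUIRESREPUBLISH   MODES        AGE
      efs.csi.aws.com   false            false            false             <unset>         false               Persistent   15m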