unsupported Kubernetes version (Service: Eks, Status Code: 400, Request ID) but it's actually the eksctl version!

First of all, the error message is misleading. It made even me scratch my head and spend a few hours chasing "CloudFormation", which always led me back to the error below:

2022-01-27 17:56:55 UTC+0530	ControlPlane	CREATE_FAILED	Resource handler returned message: “unsupported Kubernetes version (Service: Eks, Status Code: 400, Request ID: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa, Extended Request ID: null)” (RequestToken: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaa, HandlerErrorCode: InvalidRequest)

I found it surprising that there wasn't a single Google result revealing the actual cause of this error, so I arrived at a solution by trial and error. The official document linked below doesn't mention this issue either, as it was last updated quite a while ago.

Before revealing the solution, here is the full error log that led to this issue:

mukesh@CXS-MukeshK:create-eks-cluster$ eksctl create cluster --name poc-cluster --node-type l1.medium --nodes 1 --nodes-min 1 --nodes-max 1 --region us-east-1
[ℹ]  eksctl version 0.29.2
[ℹ]  using region us-east-1
[ℹ]  setting availability zones to [us-east-1d us-east-1c]
[ℹ]  subnets for us-east-1a - public:aaa.aaa.a.a/bb private:aaa.aaa.a.a/bb
[ℹ]  subnets for us-east-1a - public:aaa.aaa.a.a/bb private:aaa.aaa.a.a/bb
[ℹ]  nodegroup "ng-xx" will use "ami-xxx" [AmazonLinux2/1.17]
[ℹ]  using Kubernetes version 1.17
[ℹ]  creating EKS cluster "poc-cluster" in "us-east-1" region with un-managed nodes
[ℹ]  will create 2 separate CloudFormation stacks for cluster itself and the initial nodegroup
[ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=poc-cluster'
[ℹ]  CloudWatch logging will not be enabled for cluster "poc-cluster" in "us-east-1"
[ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=poc-cluster'
[ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "poc-cluster" in "us-east-1"
[ℹ]  2 sequential tasks: { create cluster control plane "poc-cluster", 2 sequential sub-tasks: { no tasks, create nodegroup "ng-xxx" } }
[ℹ]  building cluster stack "eksctl-poc-cluster-cluster"
[ℹ]  deploying stack "eksctl-poc-cluster-cluster"
[✖]  unexpected status "ROLLBACK_IN_PROGRESS" while waiting for CloudFormation stack "eksctl-poc-cluster-cluster"
[ℹ]  fetching stack events in attempt to troubleshoot the root cause of the failure
[!]  AWS::EC2::SecurityGroup/ClusterSharedNodeSecurityGroup: DELETE_IN_PROGRESS
[!]  AWS::IAM::Role/ServiceRole: DELETE_IN_PROGRESS
[!]  AWS::EC2::SecurityGroup/ControlPlaneSecurityGroup: DELETE_IN_PROGRESS
[!]  AWS::EC2::Subnet/SubnetPublicUSEAST1C: DELETE_IN_PROGRESS
[!]  AWS::IAM::Policy/PolicyELBPermissions: DELETE_IN_PROGRESS
[!]  AWS::EC2::NatGateway/NATGateway: DELETE_IN_PROGRESS
[!]  AWS::EC2::VPCGatewayAttachment/VPCGatewayAttachment: DELETE_IN_PROGRESS
[!]  AWS::EC2::SubnetRouteTableAssociation/RouteTableAssociationPrivateUSEAST1D: DELETE_IN_PROGRESS
[!]  AWS::EC2::RouteTable/PublicRouteTable: DELETE_IN_PROGRESS
[!]  AWS::EC2::SecurityGroupIngress/IngressInterNodeGroupSG: DELETE_IN_PROGRESS
[!]  AWS::IAM::Policy/PolicyCloudWatchMetrics: DELETE_IN_PROGRESS
[!]  AWS::EC2::SubnetRouteTableAssociation/RouteTableAssociationPrivateUSEAST1C: DELETE_IN_PROGRESS
[✖]  AWS::EC2::RouteTable/PublicRouteTable: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::EC2::SubnetRouteTableAssociation/RouteTableAssociationPrivateUSEAST1D: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::EC2::SubnetRouteTableAssociation/RouteTableAssociationPrivateUSEAST1C: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::EC2::NatGateway/NATGateway: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::EC2::VPCGatewayAttachment/VPCGatewayAttachment: CREATE_FAILED – "Resource creation cancelled"
[✖]  AWS::EKS::Cluster/ControlPlane: CREATE_FAILED – "Resource handler returned message: \"unsupported Kubernetes version (Service: Eks, Status Code: 400, Request ID: xx-xxx-xx-xx-xxx, Extended Request ID: null)\" (RequestToken: axxx-ccxxx-xxxx-aaaa-xxxxx, HandlerErrorCode: InvalidRequest)"
[!]  1 error(s) occurred and cluster hasn't been created properly, you may wish to check CloudFormation console
[ℹ]  to cleanup resources, run 'eksctl delete cluster --region=us-east-1 --name=poc-cluster'
[✖]  waiting for CloudFormation stack "eksctl-poc-cluster-cluster": ResourceNotReady: failed waiting for successful resource state
Error: failed to create cluster "poc-cluster"

Now, when I looked through the CloudFormation console, the story was quite different: it showed a mix of failed and successful resources, and I still needed to figure out which misstep was breaking the workflow:

2022-01-27 17:56:57 UTC+0530	eksctl-mk-poc-dev-cluster	ROLLBACK_IN_PROGRESS	The following resource(s) failed to create: [RouteTableAssociationPrivateUSEAST1C, PublicSubnetRoute, ControlPlane, NATGateway]. Rollback requested by user.
2022-01-27 17:56:55 UTC+0530	ControlPlane	CREATE_FAILED	Resource handler returned message: "unsupported Kubernetes version (Service: Eks, Status Code: 400, Request ID: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaa, Extended Request ID: null)" (RequestToken: aaaaaaaa-aaaa-aaaa-aaaa-aaaaaaaaaaaa, HandlerErrorCode: InvalidRequest)
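
If you prefer the CLI over the console, the same failure events can be pulled with the AWS CLI (a minimal sketch; adjust the stack name to whatever eksctl printed for your cluster):

# List the events for the failed cluster stack and keep only the ones that
# actually failed, along with the reason CloudFormation recorded for each.
aws cloudformation describe-stack-events \
  --stack-name eksctl-poc-cluster-cluster \
  --region us-east-1 \
  --query "StackEvents[?ResourceStatus=='CREATE_FAILED'].[LogicalResourceId,ResourceStatusReason]" \
  --output table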

Now let's get to the solution. I checked the version of eksctl and found:

mukesh@CXS-MukeshK:create-eks-cluster$ eksctl version
0.29.2

That is the culprit: eksctl 0.29.2 defaults to Kubernetes 1.17 (note "using Kubernetes version 1.17" in the failing log above), a version that EKS no longer supports, which is why the API returns the 400 "unsupported Kubernetes version". So visit the URL below to upgrade eksctl; thankfully the upgrade is quick and simple.

The eksctl command-line utility – Amazon EKS
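
For reference, this is roughly what the upgrade looks like on Linux (a sketch based on the standard install method from the eksctl releases page; check the AWS doc above for your platform), followed by cleaning up the half-created stack before retrying:

# Download the latest eksctl release for this platform and unpack it into /tmp
curl --silent --location "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp

# Move the binary onto the PATH and confirm the new version
sudo mv /tmp/eksctl /usr/local/bin/
eksctl version

# Clean up the failed stack from the earlier attempt, as eksctl itself suggested
eksctl delete cluster --region=us-east-1 --name=poc-cluster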

Below is the result after upgrading to the latest version (and a small question to AWS: is it really necessary to release ten version upgrades per month?).

mukesh@aaa-MukeshK:create-eks-cluster$ eksctl version
0.80.0

Now I can create the cluster using a similar cluster-config YAML.
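The actual mk-control-plane.yaml isn't reproduced here, but a minimal eksctl ClusterConfig along these lines would do the job (the cluster name, region, Kubernetes version and the VPC/subnet IDs are assumptions inferred from the log below; substitute your own):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: poc-cluster
  region: us-east-1
  version: "1.21"

# Reuse an existing VPC and subnets instead of letting eksctl create new ones
vpc:
  id: vpc-xxxx
  subnets:
    private:
      us-east-1a: { id: subnet-xxxxx }
      us-east-1b: { id: subnet-xxxxxx }
    public:
      us-east-1a: { id: subnet-xxxxx }
      us-east-1b: { id: subnet-aaaaaaaaaaaaaaaa }

With that file in place, here is the result: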

mukesh@CXS-MukeshK:create-eks-cluster$ eksctl create cluster -f mk-control-plane.yaml --write-kubeconfig --set-kubeconfig-context
2022-01-27 19:13:41 [ℹ]  eksctl version 0.80.0
2022-01-27 19:13:41 [ℹ]  using region us-east-1
2022-01-27 19:13:45 [✔]  using existing VPC (vpc-xxxx) and subnets (private:map[us-east-1a:{subnet-xxxxx us-east-1a 11.1.111.1/11} us-east-1b:{subnet-xxxxxx us-east-4g 11.1.1.111/11}] public:map[us-east-1a:{subnet-xxxxx us-east-1a 11.1.1.1/11} us-east-1b:{subnet-aaaaaaaaaaaaaaaa us-east-1b 11.1.11.1/11}])
2022-01-27 19:13:45 [!]  custom VPC/subnets will be used; if resulting cluster doesn't function as expected, make sure to review the configuration of VPC/subnets
2022-01-27 19:13:45 [ℹ]  using Kubernetes version 1.21
2022-01-27 19:13:45 [ℹ]  creating EKS cluster "poc-cluster" in "us-east-1" region with
2022-01-27 19:13:45 [ℹ]  will create a CloudFormation stack for cluster itself and 0 nodegroup stack(s)
2022-01-27 19:13:45 [ℹ]  will create a CloudFormation stack for cluster itself and 0 managed nodegroup stack(s)
2022-01-27 19:13:45 [ℹ]  if you encounter any issues, check CloudFormation console or try 'eksctl utils describe-stacks --region=us-east-1 --cluster=poc-cluster'
2022-01-27 19:13:45 [ℹ]  Kubernetes API endpoint access will use default of {publicAccess=true, privateAccess=false} for cluster "poc-cluster" in "us-east-1"
2022-01-27 19:13:45 [ℹ]  CloudWatch logging will not be enabled for cluster "poc-cluster" in "us-east-1"
2022-01-27 19:13:45 [ℹ]  you can enable it with 'eksctl utils update-cluster-logging --enable-types={SPECIFY-YOUR-LOG-TYPES-HERE (e.g. all)} --region=us-east-1 --cluster=poc-cluster'
2022-01-27 19:13:45 [ℹ]
2 sequential tasks: { create cluster control plane "poc-cluster", wait for control plane to become ready
}
2022-01-27 19:13:45 [ℹ]  building cluster stack "eksctl-poc-cluster-cluster"
2022-01-27 19:13:47 [ℹ]  deploying stack "eksctl-poc-cluster-cluster"
2022-01-27 19:14:17 [ℹ]  waiting for CloudFormation stack "eksctl-poc-cluster-cluster"
2022-01-27 19:14:49 [ℹ]  waiting for CloudFormation stack "eksctl-poc-cluster-cluster"
2022-01-27 19:28:18 [ℹ]  waiting for the control plane availability...
2022-01-27 19:28:19 [✔]  saved kubeconfig as "/home/mukesh/.kube/config"
2022-01-27 19:28:19 [ℹ]  no tasks
2022-01-27 19:28:19 [✔]  all EKS cluster resources for "poc-cluster" have been created
2022-01-27 19:28:26 [ℹ]  kubectl command should work with "/home/mukesh/.kube/config", try 'kubectl get nodes'
2022-01-27 19:28:26 [✔]  EKS cluster "poc-cluster" in "us-east-1" region is ready
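
As the output suggests, a quick sanity check with kubectl confirms the new cluster is reachable (this config created no nodegroups, so the node list may well be empty):

# Verify kubectl is talking to the new cluster via the saved kubeconfig
kubectl get nodes
kubectl get svc --all-namespaces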

Honestly, the title I wanted to use was "AWS service apocalypse clause feels real sometimes", but the present one is more likely to help someone who, like me, is frustrated that a utility (eksctl) and an AWS service (CloudFormation) meant to create and manage cluster workflows don't produce a single meaningful message pointing at the real cause.
