Deploying Kubernetes Operator for Avi on Amazon EKS

24 September 2023

Introduction

This post is part 2 of the series on creating and configuring the Avi controller on AWS and integrating it with an EKS cluster to serve deployed applications over L4 and L7 services.

Previous Step

The step-by-step process to install and configure the Avi controller and its SE group is discussed in this post ➡️ Avi on AWS: Comprehensive Installation Guide.

In this post, we continue the journey and set up AKO on the EKS cluster to connect with the Avi controller and serve the deployed apps. The Avi Kubernetes Operator (AKO) works as an ingress controller and performs Avi-specific functions in a Kubernetes environment that has access to an Avi Controller. AKO stays in sync with the required Kubernetes objects and calls Avi Controller APIs to configure the virtual services.

avi-obj-model.jpg

Pre-requisites

It's assumed that the Avi controller is installed and configured, ready to be connected with a Kubernetes cluster. We also need an AWS EKS cluster on which to configure the ingress setup.

Accessing EKS from workstation

There are many ways to provision EKS, and we assume the operator has built the EKS cluster using one of them. The cluster can then be accessed from the local workstation using the aws and eksctl CLI tools.

aws eks list-clusters | cat 
{
    "clusters": [
        "rajez-eks"
    ]
}
 
eksctl get cluster --region=ap-southeast-2
NAME              REGION          EKSCTL CREATED
rajez-eks       ap-southeast-2  False

We can fetch the kubeconfig for the EKS cluster and save it on the local workstation:

eksctl utils write-kubeconfig --cluster rajez-eks                                                                  
2023-09-27 21:40:01 [✔]  saved kubeconfig as "/Users/user1/.kube/config"
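
To confirm the kubeconfig works, we can check the current context and list the cluster nodes (assuming kubectl is installed on the workstation):

kubectl config current-context
kubectl get nodes -o wide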

Route53 hosted zone

We need a Route53 hosted zone with a domain to support the ingress and LoadBalancer services created for apps deployed in EKS. In the previous post, we already set up the Avi Cloud and the corresponding Service Engine Group with Route53 as the DNS connection type.

avi-cloud-with-dns.jpg
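
If needed, the hosted zone can also be confirmed from the CLI:

aws route53 list-hosted-zones --query "HostedZones[].{Name:Name,Private:Config.PrivateZone}" --output table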

Install AKO in EKS

Avi Kubernetes Operator (AKO) can be installed on any Kubernetes cluster, including managed clusters from popular clouds (e.g., AWS, Azure, GCP).

AKO is installed on Kubernetes with the Helm package manager. The latest version of AKO can be found on the Avi Vantage site or in the VMware docs.

Install AKO

Create the avi-system namespace

kubectl create ns avi-system

Check the Helm chart details for AKO:

helm show chart oci://projects.registry.vmware.com/ako/helm-charts/ako --version 1.10.3
 
Pulled: projects.registry.vmware.com/ako/helm-charts/ako:1.10.3
Digest: sha256:1f2f9b89f4166737ed0d0acf4ebfb5853fb6f67b08ca1c8dae48e1dd99d31ab6
apiVersion: v2
appVersion: 1.10.3
description: A helm chart for Avi Kubernetes Operator
name: ako
type: application
version: 1.10.3

Next, we fetch the values.yaml for the AKO Helm chart so that we can modify it and prepare for the AKO install in our Kubernetes environment.

helm show values oci://projects.registry.vmware.com/ako/helm-charts/ako --version 1.10.3 > values.yaml

We have to configure AKO (via values.yaml) for the Avi controller with the following parameters; a trimmed sketch of the resulting values.yaml follows the list.

  1. ControllerSettings.controllerVersion (Installed Avi controller version)
  2. ControllerSettings.controllerHost (IP address or Hostname of Avi Controller)
  3. ControllerSettings.cloudName (The configured cloud name on Avi controller)
  4. ControllerSettings.serviceEngineGroupName (Name of ServiceEngine Group)
  5. AKOSettings.clusterName (A unique identifier for kubernetes cluster)
  6. avicredentials.username
  7. avicredentials.password
  8. avicredentials.certificateAuthorityData
  9. NetworkSettings.nodeNetworkList (List of Networks and corresponding CIDR mappings for K8s nodes. It's optional when in NodePort mode / static routes are disabled / non vcenter clouds)
  10. NetworkSettings.vipNetworkList (List of Network Names or Subnet [format: subnet-xxx] information for VIP network, multiple networks allowed only for AWS Cloud)
  11. L4Settings.defaultDomain ( Specify a default sub-domain for L4 LB services as per the route53 hosted zone configured as pre-requisites)
  12. L4Settings.autoFQDN (ENUM: default(<svc>.<ns>.<subdomain>), flat (<svc>-<ns>.<subdomain>), "disabled" If the value is disabled then the FQDN generation is disabled)
  13. L7Settings.serviceType (NodePort|ClusterIP|NodePortLocal , default to ClusterIP)
  14. AKOSettings.cniPlugin (We can leave this field blank or "" for EKS with the default CNI)
  15. AKOSettings.layer7Only: false (If this flag is switched on, AKO will only do layer 7 load balancing.)
  16. AKOSettings.disableStaticRouteSync: true (If the pod networks are reachable from the Avi SE, set this knob to true; see the Podcidr reachability section below.)
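
For reference, a trimmed values.yaml sketch covering the fields above could look like the following; every value here is a placeholder or example for this environment and should be replaced with your own controller, cloud, SE group, and credential details:

ControllerSettings:
  controllerVersion: "<avi-controller-version>"   # installed Avi controller version
  controllerHost: "<avi-controller-ip-or-fqdn>"
  cloudName: "<aws-cloud-name>"                   # the cloud configured on the Avi controller
  serviceEngineGroupName: "<se-group-name>"
AKOSettings:
  clusterName: "rajez-eks"                        # unique identifier for this Kubernetes cluster
  cniPlugin: ""                                   # blank for the default EKS VPC CNI
  layer7Only: false
  disableStaticRouteSync: true                    # see the Podcidr reachability section below
NetworkSettings:
  nodeNetworkList: []                             # optional here (non-vCenter cloud, static routes disabled)
  vipNetworkList:
    - networkName: "subnet-xxxxxxxx"              # VIP subnet (AWS subnet ID format)
      cidr: "<vip-subnet-cidr>"
L4Settings:
  defaultDomain: "kubetest.com"                   # sub-domain from the Route53 hosted zone
  autoFQDN: "default"                             # <svc>.<ns>.<subdomain>
L7Settings:
  serviceType: ClusterIP
avicredentials:
  username: "admin"
  password: "<password>"
  certificateAuthorityData: |
    -----BEGIN CERTIFICATE-----
    <raw ca.crt data from the Avi controller>
    -----END CERTIFICATE-----
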
Fetch ca.crt for Avi

To get the ca.crt that enables HTTPS access from AKO to the Avi controller, go to the controller UI at Templates > Security > SSL/TLS Certificates, fetch the installed CA certificate, and put the raw cert data into values.yaml (avicredentials.certificateAuthorityData).

Podcidr reachability

The values.yaml for the AKO Helm chart contains a flag called disableStaticRouteSync. If the pod networks are reachable from the Avi SE, set this knob to true. In EKS, the pod CIDR is a private network and is not reachable from the subnet network by default; however, the EKS VPC CNI does not publish a pod CIDR in the node spec either, so leaving static route sync enabled (disableStaticRouteSync: false) throws errors in the AKO pod log like the ones below:

2023-10-07T03:30:41.898Z        ERROR    lib/dynamic_client.go:353       Error in fetching Pod CIDR from NodeSpec ip-10-0-22-134.ap-southeast-2.compute.internal
2023-10-07T03:30:41.898Z        ERROR    nodes/avi_vrf_translator.go:208 Error in fetching Pod CIDR for ip-10-0-22-134.ap-southeast-2.compute.internal: podcidr not found
2023-10-07T03:30:41.898Z        ERROR    nodes/avi_vrf_translator.go:74  key: Node/ip-10-0-22-134.ap-southeast-2.compute.internal, Error Adding vrf for node ip-10-0-22-134.ap-southeast-2.compute.internal: podcidr not found

Thus, to achieve the connectivity, we need to make the EKS node and pod CIDR networks accessible to the Avi SEs. We could manually add a rule in the EKS security group allowing access from the Avi SE security group(s). But that is tedious: every time an SE gets added or changed, its auto-generated security group would need to be re-referenced in the EKS security group.

A better solution is to configure the EKS security group within the SE Group settings on the Avi Controller. The Avi SE Group config has an option (Data vNIC Custom Security Groups) to associate custom security groups with the data vNICs of SE instances. We can add the EKS security group there so that the SEs become reachable to the EKS network, and then set the AKO parameter disableStaticRouteSync: true, since the Avi SEs and EKS networks now have direct connectivity.

avi-se-group-sgs.jpg

AKO Service Type

The default serviceType in the AKO settings is ClusterIP, which makes the Avi SE reach apps in Kubernetes via their ClusterIP services. As an alternative, we can set the AKO serviceType to NodePort so that apps are reached via their NodePort services; in that case pod CIDR reachability is not needed, since the EKS nodes are already reachable from the Avi SE.

NodePortLocal as AKO service config

We cannot configure NodePortLocal as the AKO serviceType unless the EKS cluster runs Antrea as its CNI. With the default VPC CNI for EKS, only ClusterIP or NodePort can be configured as the serviceType in AKO.

As the next step, let's install AKO in the EKS cluster.

helm install ako oci://projects.registry.vmware.com/ako/helm-charts/ako --version 1.10.3 -f /path/to/values.yaml --namespace=avi-system
 
NAME: ako
LAST DEPLOYED: Thu Oct  5 18:22:41 2023
NAMESPACE: avi-system
STATUS: deployed
REVISION: 1
TEST SUITE: None

We can verify the installation in Kubernetes:

k get all,ing,secrets,cm
NAME        READY   STATUS    RESTARTS   AGE
pod/ako-0   1/1     Running   0          100m
 
NAME                   READY   AGE
statefulset.apps/ako   1/1     100m
 
NAME                               TYPE                 DATA   AGE
secret/avi-secret                  Opaque               3      100m
secret/sh.helm.release.v1.ako.v1   helm.sh/release.v1   1      100m
 
NAME                         DATA   AGE
configmap/avi-k8s-config     37     100m
configmap/kube-root-ca.crt   1      7d21h

We can observe that AKO gets installed as a StatefulSet, and we can set the replica count in values.yaml (for example, to 3) for high availability and better performance.
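
A minimal sketch, assuming the chart keeps the top-level replicaCount field from its default values.yaml, would be the following, applied with a helm upgrade using the same chart reference and values file:

replicaCount: 3    # number of AKO pods in the StatefulSet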

AKO config in EKS

Once AKO is installed, the operator can inspect and even change the AKO config (though that is not recommended) via the ConfigMap avi-k8s-config created in the avi-system namespace.
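
For example, the rendered configuration can be inspected with:

kubectl get configmap avi-k8s-config -n avi-system -o yaml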

Check for provisioned SE

At this point, we can observe that the Service Engines (SEs) have been provisioned and are waiting for apps in EKS to create an Ingress or LoadBalancer service, which then gets a corresponding virtual service in Avi.

avi-se-group.jpg

The Avi controller manages the lifecycle of the Service Engines (SEs): it provisions the corresponding EC2 instances and deletes SEs based on the SE Group idle-time setting when no virtual service exists.

Avi also creates an AWS security group for each SE and attaches it to the corresponding EC2 instance.

avi-se-sg.jpg

Build app to test ingress with Avi

We can follow the steps below to create a simple nginx-based web app to test the AKO and Avi setup:

k create deploy webapp --image nginx -n testns                                                             
deployment.apps/webapp created
 
k expose deploy webapp --port 80 -n testns
service/webapp exposed
 
k get all
 
NAME                          READY   STATUS    RESTARTS   AGE
pod/webapp-8474645868-pj4kr   1/1     Running   0          41s
 
NAME             TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
service/webapp   ClusterIP   172.20.124.74   <none>        80/TCP    6s
 
NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/webapp   1/1     1            1           42s
 
NAME                                DESIRED   CURRENT   READY   AGE
replicaset.apps/webapp-8474645868   1         1         1       42s

Ingress resource for app

As we have configured Route53 for DNS in Avi, we can visit the Route53 console to verify the setup and use the domain when creating the application's ingress resource.

r53-hostedzone.jpg

We can create an ingress resource for this application:

k create ing webapp --rule=webapp.kubetest.com/=webapp:80 -n testns
ingress.networking.k8s.io/webapp created

Describe the generated ingress and observe that AKO annotations get added and Avi starts provisioning a virtual service for the ingress.

k describe ing webapp -n testns
 
Name:             webapp
Labels:           <none>
Namespace:        testns
Address:          10.0.10.193
Ingress Class:    avi-lb
Default backend:  <default>
Rules:
  Host                 Path  Backends
  ----                 ----  --------
  webapp.kubetest.com
                       /   webapp:80 (10.0.17.54:80)
Annotations:           ako.vmware.com/controller-cluster-uuid: cluster-94731a50-9c61-4fe0-9e26-8fb92758fd0c
                       ako.vmware.com/host-fqdn-vs-uuid-map: {"webapp.kubetest.com":"virtualservice-918dff0e-bb46-4f3c-bc28-4d226204adc0"}
Events:
  Type    Reason  Age   From                     Message
  ----    ------  ----  ----                     -------
  Normal  Synced  48s   avi-kubernetes-operator  Added virtualservice rajez-eks--Shared-L7-4 for webapp.kubetest.com

Inspect the ingress resource to check the details, including the IP assigned by Avi:

k get ing webapp -n testns -oyaml
 
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    ako.vmware.com/controller-cluster-uuid: cluster-94731a50-9c61-4fe0-9e26-8fb92758fd0c
    ako.vmware.com/host-fqdn-vs-uuid-map: '{"webapp.kubetest.com":"virtualservice-918dff0e-bb46-4f3c-bc28-4d226204adc0"}'
  creationTimestamp: "2023-10-07T05:03:03Z"
  generation: 1
  name: webapp
  namespace: testns
  resourceVersion: "445729"
  uid: 7106ed2c-fc81-48b6-91ee-c3215af8f239
spec:
  ingressClassName: avi-lb
  rules:
  - host: webapp.kubetest.com
    http:
      paths:
      - backend:
          service:
            name: webapp
            port:
              number: 80
        path: /
        pathType: Exact
status:
  loadBalancer:
    ingress:
    - hostname: webapp.kubetest.com
      ip: 10.0.10.193

We can verify in Route53 that Avi has added an A record to the hosted zone for the created ingress (webapp.kubetest.com), exposed on a VIP (10.0.10.193) that is managed by Avi. The app domain is accessible across the VPC, and externally if a public hosted zone is used instead of a private one.

r53-l7-records.jpg
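
Alternatively, the records can be listed from the CLI (the hosted zone ID below is a placeholder):

aws route53 list-resource-record-sets \
  --hosted-zone-id <hosted-zone-id> \
  --query "ResourceRecordSets[?Type=='A'].{Name:Name,Value:ResourceRecords[0].Value}" \
  --output table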

We can check Avi for the created virtual service assigned to the ingress for the application.

vs-for-l7-avi.jpg

And we can also verify access to the application:

$ curl http://webapp.kubetest.com -I
HTTP/1.1 200 OK
Content-Type: text/html
Content-Length: 615
Connection: keep-alive
Server: nginx/1.25.2
Date: Sat, 07 Oct 2023 05:26:12 GMT
Last-Modified: Tue, 15 Aug 2023 17:03:04 GMT
ETag: "64dbafc8-267"
Accept-Ranges: bytes

We can perform further tests using the ingress resource for webapp, and the analytics can be viewed on the Avi dashboard.

webapp-analytics.jpg

LoadBalancer service for app

There might be a need to expose an application with a LoadBalancer service and access the app via its external IP. For such use cases, EKS by default provisions an ELB for any LoadBalancer service.

Once the Avi controller is configured with AKO and AKO serves both L7 and L4 services for EKS, an IP from the VIP range gets assigned to the app's LoadBalancer service; provisioning the VIP and managing connectivity for the LB service are handled by Avi. Another benefit is that an FQDN is created and registered in Route53 for the LoadBalancer service, based on the AKO setting L4Settings.autoFQDN.

To test the setup, we have created another app, "testapp", with the nginx image in the appns namespace of the EKS cluster.
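
The commands mirror the webapp steps; as a sketch, assuming the appns namespace does not exist yet:

k create ns appns
k create deploy testapp --image nginx -n appns
k expose deploy testapp --port 80 -n appns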

k get all,ing -n appns
NAME                           READY   STATUS    RESTARTS   AGE
pod/testapp-6fb7769966-glk9q   1/1     Running   0          28s
 
NAME              TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
service/testapp   ClusterIP   172.20.9.20   <none>        80/TCP    11s
 
NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/testapp   1/1     1            1           29s
 
NAME                                 DESIRED   CURRENT   READY   AGE
replicaset.apps/testapp-6fb7769966   1         1         1       29s

As the next step, we can change the service type to LoadBalancer and let Avi provision the VIP for the app's LB service.
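
One way to do this is to patch the service type, sketched below:

k patch svc testapp -n appns -p '{"spec":{"type":"LoadBalancer"}}'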

Once done, we can check that the service gets the Avi-related annotations, as shown below:

k describe svc -n appns testapp
Name:                     testapp
Namespace:                appns
Labels:                   app=testapp
Annotations:              ako.vmware.com/controller-cluster-uuid: cluster-94731a50-9c61-4fe0-9e26-8fb92758fd0c
                          ako.vmware.com/host-fqdn-vs-uuid-map: {"testapp.appns.kubetest.com":"virtualservice-76d1214d-23b1-40c4-acd9-3c73dd8d9a62"}
Selector:                 app=testapp
Type:                     LoadBalancer

We can validate that Avi provisions an L4 virtual service and assigns a VIP (10.0.5.200), which is exposed on the domain testapp.appns.kubetest.com for the app's LoadBalancer service.

avi-l4-vsip-testapp.jpg

We can check that the AWS Route53 hosted zone gets another A record for the app's LoadBalancer service.

avi-l4-r53-arecord.jpg

The Avi dashboard shows the service's interaction with the server (pod) in EKS.

avi-l4-vs-testapp.jpg

And we can also verify access to the application:

$ curl http://testapp.appns.kubetest.com -I
HTTP/1.1 200 OK
Server: nginx/1.25.3
Date: Sat, 28 Oct 2023 03:16:07 GMT
Content-Type: text/html
Content-Length: 615
Last-Modified: Tue, 24 Oct 2023 13:46:47 GMT
Connection: keep-alive
ETag: "6537cac7-267"
Accept-Ranges: bytes

We can perform further tests using the LoadBalancer service for testapp, and the analytics can be viewed on the Avi dashboard.

testapp-analytics.jpg

Conclusion

In this post, we have walked through the step-by-step process to configure AKO on EKS with the Avi controller and performed tests by creating L4 and L7 services. I hope these steps help operators configure Avi on AWS. Thank you ☺️