© 2023 Yandex.Cloud LLC

Configuring the Application Load Balancer Ingress controller

Written by
Yandex Cloud
  • Before you start
  • Create a namespace for the Application Load Balancer Ingress controller
  • Install the Application Load Balancer Ingress controller
  • Create an Ingress controller and test applications
  • Make sure the Kubernetes cluster applications are accessible through Application Load Balancer
  • Delete the resources you created

Yandex Application Load Balancer is designed for load balancing and traffic distribution across applications. To use it to manage incoming traffic for applications running in a Managed Service for Kubernetes cluster, you need an Ingress controller.

To set up access to the applications running in your cluster via Application Load Balancer:

  1. Create a namespace for the Application Load Balancer Ingress controller.
  2. Install the Application Load Balancer Ingress controller.
  3. Create an Ingress controller and test applications.
  4. Make sure the Kubernetes cluster applications are accessible through Application Load Balancer.

Before you start

  1. If you don't have the Yandex Cloud command line interface yet, install and initialize it.

    The folder specified in the CLI profile is used by default. You can specify a different folder using the --folder-name or --folder-id parameter.

  2. Create a service account for the Ingress controller to run.

    1. Assign it the following roles:

      • alb.editor: To create the required resources.
      • vpc.publicAdmin: To manage external connectivity.
      • certificate-manager.certificates.downloader: To use certificates registered in Yandex Certificate Manager.
      • compute.viewer: To use Managed Service for Kubernetes cluster nodes in target groups of the load balancer.
    2. Create an authorized key for the service account and save it to sa-key.json:

      yc iam key create \
        --service-account-name <name of the Ingress controller service account> \
        --output sa-key.json
      
  3. Register a public domain zone and delegate your domain.

  4. If you already have a certificate for the domain zone, add information about it to Certificate Manager. Or create a new Let's Encrypt® certificate.

  5. Create a Managed Service for Kubernetes cluster with the Public address setting set to Auto.

  6. Create a node group in any suitable configuration.

  7. Configure security groups for the cluster and its node groups. A node group's security group must allow incoming TCP traffic on ports 10501 and 10502 from the load balancer subnets or from the load balancer security group (you will need to specify these subnets and the group later, when creating the Ingress controller).

  8. Install the Helm package manager, version 3.7.0 or higher.

  9. Install kubectl and configure it to work with the created cluster.

  10. Check that you can connect to the cluster using kubectl:

    kubectl cluster-info
    

Create a namespace for the Application Load Balancer Ingress controller

To create a namespace, run the following command:

kubectl create namespace yc-alb-ingress
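
If you prefer declarative manifests, the same namespace can be created from a file and applied with kubectl apply -f (a minimal sketch; the name matches the command above):

```yaml
# Namespace for the ALB Ingress controller, equivalent to the kubectl create command above
apiVersion: v1
kind: Namespace
metadata:
  name: yc-alb-ingress
```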

Install the Application Load Balancer Ingress controller

To install a Helm chart with the Ingress controller, run the commands:

export HELM_EXPERIMENTAL_OCI=1 && \
cat sa-key.json | helm registry login cr.yandex --username 'json_key' --password-stdin && \
helm pull oci://cr.yandex/yc/yc-alb-ingress-controller-chart \
  --version=v0.1.9 \
  --untar && \
helm install \
  --namespace yc-alb-ingress \
  --set folderId=<folder ID> \
  --set clusterId=<cluster ID> \
  --set-file saKeySecretKey=sa-key.json \
  yc-alb-ingress-controller ./yc-alb-ingress-controller-chart/

You can find the cluster ID in the list of clusters in the folder.

Create an Ingress controller and test applications

The Ingress controller's workload can include Kubernetes services, target groups in Application Load Balancer, or buckets in Yandex Object Storage.

Before getting started, get the ID of the previously added TLS certificate:

yc certificate-manager certificate list

Result:

+------+--------+---------------+---------------------+----------+--------+
|  ID  |  NAME  |    DOMAINS    |      NOT AFTER      |   TYPE   | STATUS |
+------+--------+---------------+---------------------+----------+--------+
| <ID> | <name> | <domain name> | 2022-01-06 17:19:37 | IMPORTED | ISSUED |
+------+--------+---------------+---------------------+----------+--------+
Ingress controller for Kubernetes services
  1. In a separate folder, create a file named ingress.yaml and specify the previously delegated domain name, certificate ID, and settings for Application Load Balancer in it:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: alb-demo-tls
      annotations:
        ingress.alb.yc.io/subnets: <list of subnet IDs>
        ingress.alb.yc.io/security-groups: <list of security group IDs>
        ingress.alb.yc.io/external-ipv4-address: <auto or static IP address>
        ingress.alb.yc.io/group-name: <Ingress group name>
    spec:
      tls:
        - hosts:
            - <domain name>
          secretName: yc-certmgr-cert-id-<TLS certificate ID>
      rules:
        - host: <domain name>
          http:
            paths:
              - path: /app1
                pathType: Prefix
                backend:
                  service:
                    name: alb-demo-1
                    port:
                      number: 80
              - path: /app2
                pathType: Prefix
                backend:
                  service:
                    name: alb-demo-2
                    port:
                      number: 80
              - pathType: Prefix
                path: "/"
                backend:
                  service:
                    name: alb-demo-2
                    port:
                      name: http
    

    Where:

    • ingress.alb.yc.io/subnets: One or more subnets that Application Load Balancer is going to work with.
    • ingress.alb.yc.io/security-groups: One or more security groups for Application Load Balancer. If the parameter is omitted, the default security group is used. At least one of the security groups must allow outgoing TCP connections to ports 10501 and 10502 in the node group subnet or security group.
    • ingress.alb.yc.io/external-ipv4-address: Provides public internet access to Application Load Balancer. Enter a previously obtained IP address, or use the auto value to get a new one automatically.
    • ingress.alb.yc.io/group-name: Grouping of Kubernetes Ingress resources, with each group served by a separate Application Load Balancer instance. Enter the name of the group.

    (Optional) Enter the advanced settings for the controller:

    • ingress.alb.yc.io/internal-ipv4-address: Provide internal access to Application Load Balancer. Enter the internal IP address or use auto to obtain the IP address automatically.

      Note

      You can only use one type of access to Application Load Balancer at a time: ingress.alb.yc.io/external-ipv4-address or ingress.alb.yc.io/internal-ipv4-address.

    • ingress.alb.yc.io/internal-alb-subnet: The subnet for hosting the Application Load Balancer internal IP address. This parameter is required if the ingress.alb.yc.io/internal-ipv4-address parameter is selected.

    • ingress.alb.yc.io/protocol: The connection protocol used by the load balancer and the backends:

      • http: HTTP/1.1. Default value.
      • http2: HTTP/2.
      • grpc: gRPC.
    • ingress.alb.yc.io/transport-security: The encryption protocol used by the connections between the load balancer and the backends:

      • tls: TLS with no certificate challenge.

      If no annotation is specified, the load balancer connects to the backends without encryption.

    • ingress.alb.yc.io/prefix-rewrite: Replace the path with the specified value.

    • ingress.alb.yc.io/upgrade-types: Valid values for the Upgrade HTTP header, for example, websocket.

    • ingress.alb.yc.io/request-timeout: The maximum time allowed for establishing the connection.

    • ingress.alb.yc.io/idle-timeout: The maximum connection keep-alive time with no data to transmit.

      Values for request-timeout and idle-timeout must be specified with units of measurement, for example: 300ms, 1.5h. Acceptable units of measurement:

      • ns: Nanoseconds.
      • us: Microseconds.
      • ms: Milliseconds.
      • s: Seconds.
      • m: Minutes.
      • h: Hours.

      Note

      The settings only apply to the hosts of the given controller rather than the entire Ingress group.
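
As a quick sanity check, single-unit duration values like those above can be validated with a simple pattern (a sketch; the regular expression below is an assumption covering only single-unit values, not compound ones such as 1h30m):

```shell
# Validate annotation duration strings against the units listed above.
for v in 300ms 1.5h 30s 5x; do
  if echo "$v" | grep -Eq '^[0-9]+(\.[0-9]+)?(ns|us|ms|s|m|h)$'; then
    echo "$v: valid"
  else
    echo "$v: invalid"
  fi
done
```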

  2. In the same folder, create the demo-app1.yaml and demo-app2.yaml application files:

    demo-app1.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: alb-demo-1
    data:
      nginx.conf: |
        worker_processes auto;
        events {
        }
        http {
          server {
            listen 80;
            location = /_healthz {
              add_header Content-Type text/plain;
              return 200 'ok';
            }
            location / {
              add_header Content-Type text/plain;
              return 200 'Index';
            }
            location = /app1 {
              add_header Content-Type text/plain;
              return 200 'This is APP#1';
            }
          }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: alb-demo-1
      labels:
        app: alb-demo-1
        version: v1
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: alb-demo-1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: alb-demo-1
            version: v1
        spec:
          terminationGracePeriodSeconds: 5
          volumes:
            - name: alb-demo-1
              configMap:
                name: alb-demo-1
          containers:
            - name: alb-demo-1
              image: nginx:latest
              ports:
                - name: http
                  containerPort: 80
              livenessProbe:
                httpGet:
                  path: /_healthz
                  port: 80
                initialDelaySeconds: 3
                timeoutSeconds: 2
                failureThreshold: 2
              volumeMounts:
                - name: alb-demo-1
                  mountPath: /etc/nginx
                  readOnly: true
              resources:
                limits:
                  cpu: 250m
                  memory: 128Mi
                requests:
                  cpu: 100m
                  memory: 64Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: alb-demo-1
    spec:
      selector:
        app: alb-demo-1
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
          nodePort: 30081
    
    demo-app2.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: alb-demo-2
    data:
      nginx.conf: |
        worker_processes auto;
        events {
        }
        http {
          server {
            listen 80;
            location = /_healthz {
              add_header Content-Type text/plain;
              return 200 'ok';
            }
            location / {
              add_header Content-Type text/plain;
              return 200 'Add app#';
            }
            location = /app2 {
              add_header Content-Type text/plain;
              return 200 'This is APP#2';
            }
          }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: alb-demo-2
      labels:
        app: alb-demo-2
        version: v1
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: alb-demo-2
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: alb-demo-2
            version: v1
        spec:
          terminationGracePeriodSeconds: 5
          volumes:
            - name: alb-demo-2
              configMap:
                name: alb-demo-2
          containers:
            - name: alb-demo-2
              image: nginx:latest
              ports:
                - name: http
                  containerPort: 80
              livenessProbe:
                httpGet:
                  path: /_healthz
                  port: 80
                initialDelaySeconds: 3
                timeoutSeconds: 2
                failureThreshold: 2
              volumeMounts:
                - name: alb-demo-2
                  mountPath: /etc/nginx
                  readOnly: true
              resources:
                limits:
                  cpu: 250m
                  memory: 128Mi
                requests:
                  cpu: 100m
                  memory: 64Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: alb-demo-2
    spec:
      selector:
        app: alb-demo-2
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
          nodePort: 30082
    
  3. Create an Ingress controller and applications:

    kubectl apply -f .
    
  4. Wait until the Ingress controller is created and assigned a public IP address. This may take several minutes:

    kubectl get ingress alb-demo-tls
    

    The expected result is a non-empty value in the ADDRESS field for the created Ingress controller:

    NAME          CLASS   HOSTS          ADDRESS       PORTS    AGE
    alb-demo-tls  <none>  <domain name>  <IP address>  80, 443  15h
    

    Based on the Ingress controller configuration, an L7 load balancer is deployed automatically.

Ingress controller for a backend group

To set up a backend group, use the HttpBackendGroup CustomResourceDefinition. As a backend, you can use an Application Load Balancer target group or an Object Storage bucket.

To configure Application Load Balancer to work with a backend group:

  1. Create a backend group with a bucket:

    1. Create a public bucket in Object Storage.
    2. Configure the website homepage and error page.
  2. Create a configuration file named demo-app1.yaml for your application:

    demo-app1.yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: alb-demo-1
    data:
      nginx.conf: |
        worker_processes auto;
        events {
        }
        http {
          server {
            listen 80;
            location = /_healthz {
              add_header Content-Type text/plain;
              return 200 'ok';
            }
            location / {
              add_header Content-Type text/plain;
              return 200 'Index';
            }
            location = /app1 {
              add_header Content-Type text/plain;
              return 200 'This is APP#1';
            }
          }
        }
    ---
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: alb-demo-1
      labels:
        app: alb-demo-1
        version: v1
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: alb-demo-1
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1
          maxUnavailable: 0
      template:
        metadata:
          labels:
            app: alb-demo-1
            version: v1
        spec:
          terminationGracePeriodSeconds: 5
          volumes:
            - name: alb-demo-1
              configMap:
                name: alb-demo-1
          containers:
            - name: alb-demo-1
              image: nginx:latest
              ports:
                - name: http
                  containerPort: 80
              livenessProbe:
                httpGet:
                  path: /_healthz
                  port: 80
                initialDelaySeconds: 3
                timeoutSeconds: 2
                failureThreshold: 2
              volumeMounts:
                - name: alb-demo-1
                  mountPath: /etc/nginx
                  readOnly: true
              resources:
                limits:
                  cpu: 250m
                  memory: 128Mi
                requests:
                  cpu: 100m
                  memory: 64Mi
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: alb-demo-1
    spec:
      selector:
        app: alb-demo-1
      type: NodePort
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
          nodePort: 30081
    
  3. In a separate directory, create a file called http-group.yaml containing the HttpBackendGroup object settings:

    apiVersion: alb.yc.io/v1alpha1
    kind: HttpBackendGroup
    metadata:
      name: example-backend-group
    spec:
      backends: # The list of backends.
        - name: alb-demo-1
          weight: 70 # The backend's relative weight. Traffic is distributed in proportion to the weights of all backends in the group; specify a weight even if the group has only one backend.
          service:
             name: alb-demo-1
             port:
               number: 80
        - name: bucket-backend
          weight: 30
          storageBucket:
            name: <bucket name>
    

    (Optional) Enter the advanced settings for the controller:

    • spec.backends.useHttp2: The mode using the HTTP/2 protocol.
    • spec.backends.tls: A certificate from the certificate authority that the load balancer will trust when establishing a secure connection with backend endpoints. Specify the certificate contents in the trustedCa field as plain text.

    For more information, see Backend groups.
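
The weights determine the traffic split. Using shell arithmetic, the expected shares for the weights 70 and 30 from the manifest above work out as:

```shell
# Expected traffic shares for the two backends defined above.
total=$((70 + 30))
echo "alb-demo-1 share: $((70 * 100 / total))%"
echo "bucket-backend share: $((30 * 100 / total))%"
```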

  4. Create a file named ingress-http.yaml and specify the previously delegated domain name, certificate ID, and settings for Application Load Balancer in it:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: alb-demo-tls
      annotations:
        ingress.alb.yc.io/subnets: <list of subnet IDs> # One or more subnets that Application Load Balancer is going to work with.
        ingress.alb.yc.io/security-groups: <list of security group IDs> # One or more security groups for Application Load Balancer. If the parameter is omitted, the default security group is used.
        ingress.alb.yc.io/external-ipv4-address: <auto or static IP address> # Provide public online access to Application Load Balancer. Enter the previously obtained IP address or use `auto` to obtain a new IP address automatically.
        ingress.alb.yc.io/group-name: <Ingress group name> # Grouping of Kubernetes Ingress resources, with each group served by a separate Application Load Balancer instance.
    spec:
      tls:
        - hosts:
            - <domain name>
          secretName: yc-certmgr-cert-id-<TLS certificate ID>
      rules:
        - host: <domain name>
          http:
            paths:
              - path: /app1
                pathType: Exact
                backend:
                  resource:
                    apiGroup: alb.yc.io
                    kind: HttpBackendGroup
                    name: example-backend-group
    

    (Optional) Enter the advanced settings for the controller:

    • ingress.alb.yc.io/internal-ipv4-address: Provide internal access to Application Load Balancer. Enter the internal IP address or use auto to obtain the IP address automatically.

      Note

      You can only use one type of access to Application Load Balancer at a time: ingress.alb.yc.io/external-ipv4-address or ingress.alb.yc.io/internal-ipv4-address.

    • ingress.alb.yc.io/internal-alb-subnet: The subnet for hosting the Application Load Balancer internal IP address. This parameter is required if the ingress.alb.yc.io/internal-ipv4-address parameter is selected.

    • ingress.alb.yc.io/protocol: The connection protocol used by the load balancer and the backends:

      • http: HTTP/1.1. Default value.
      • http2: HTTP/2.
      • grpc: gRPC.
    • ingress.alb.yc.io/prefix-rewrite: Replace the path with the specified value.

    • ingress.alb.yc.io/upgrade-types: Valid values for the Upgrade HTTP header, for example, websocket.

    • ingress.alb.yc.io/request-timeout: The maximum time allowed for establishing the connection.

    • ingress.alb.yc.io/idle-timeout: The maximum connection keep-alive time with no data to transmit.

      Values for request-timeout and idle-timeout must be specified with units of measurement, for example: 300ms, 1.5h. Acceptable units of measurement:

      • ns: Nanoseconds.
      • us: Microseconds.
      • ms: Milliseconds.
      • s: Seconds.
      • m: Minutes.
      • h: Hours.

    Note

    The settings only apply to the hosts of the given controller rather than the entire Ingress group.

  5. Create an Ingress controller, an HttpBackendGroup object, and a Kubernetes app:

    kubectl apply -f .
    
  6. Wait until the Ingress controller is created and assigned a public IP address. This may take several minutes:

    kubectl get ingress alb-demo-tls
    

    The expected result is a non-empty value in the ADDRESS field for the created Ingress controller:

    NAME          CLASS   HOSTS          ADDRESS       PORTS    AGE
    alb-demo-tls  <none>  <domain name>  <IP address>  80, 443  15h
    

    Based on the Ingress controller configuration, an L7 load balancer will be automatically deployed.

Make sure the Kubernetes cluster applications are accessible through Application Load Balancer

  1. Add an A record to your domain's zone. In the Value field, specify the public IP address of the Ingress controller.
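
    In zone-file notation, such a record might look like this (hypothetical domain and documentation-range IP address):

    ```
    example.com.    600    IN    A    203.0.113.10
    ```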

  2. Configure the load balancer's security groups.

  3. Test Application Load Balancer:

    Kubernetes services

    Open the application URIs in the browser:

    https://<your domain>/app1
    https://<your domain>/app2
    

    Make sure the applications are accessible via Application Load Balancer and return pages with This is APP#1 and This is APP#2 text, respectively.

    Backend group

    Open the application URI in the browser:

    https://<your domain>/app1
    

    Make sure that the target resources are accessible via Application Load Balancer.

Delete the resources you created

If you no longer need these resources, delete them:

  1. Delete the cluster in Managed Service for Kubernetes.
  2. Delete target groups from Application Load Balancer.
  3. Delete the bucket from Object Storage.
