Relationships between resources in Managed Service for Kubernetes
Kubernetes is an environment for managing containerized applications. It provides mechanisms for interacting with clusters that automate tasks such as deploying, scaling, and managing applications in containers.
The main entity in the service is the Kubernetes cluster.
Kubernetes cluster
Kubernetes clusters consist of a master and one or more node groups. The master is responsible for managing the cluster. Containerized user applications are run on nodes.
The service fully controls the master and monitors the status and health of node groups. Users can manage nodes directly and configure clusters using the Yandex Cloud management console and the Managed Service for Kubernetes CLI and API.
Warning
Kubernetes node groups require internet access to download images and components.
Internet access can be provided in the following ways:
- Assigning each node in the group a public IP address.
- Configuring a virtual machine as a NAT instance.
- Enabling NAT to the internet.
Kubernetes clusters in the Yandex Cloud infrastructure use the following resources:
Resource | Amount | Comments |
---|---|---|
Subnet | 2 | Kubernetes reserves IP address ranges to use for pods and services. |
Public IP address | N | N includes one public IP address for the NAT instance and a public IP address for each node in the group if you use one-to-one NAT technology. |
Master
Masters are components that manage Kubernetes clusters.
They run the Kubernetes control processes, including the Kubernetes API server, scheduler, and core resource controllers. The master lifecycle is managed by the service when a Kubernetes cluster is created or deleted. The master is responsible for global decisions that affect all Kubernetes cluster nodes, such as scheduling workloads (for example, containerized applications), managing the workload lifecycle, and scaling.
There are two types of masters that differ by their location in availability zones:
- Zonal: A master created in a subnet in one availability zone.
- Regional: A master created and distributed across three subnets, one in each availability zone. If a zone becomes unavailable, the regional master remains functional.
Warning
The internal IP address of a regional master is only available within a single Yandex Virtual Private Cloud cloud network.
Node group
A node group is a group of VMs in a Kubernetes cluster that have the same configuration and run the user's containers.
Configuration
When you create a group of nodes, you can configure the following VM parameters:
- VM type.
- Type and number of cores (vCPU).
- Amount of memory (RAM) and disk space.
- Kernel parameters (see the example after this list):
  - Safe kernel parameters are isolated between pods.
  - Unsafe kernel parameters affect the operation of the pods and the node as a whole. In Managed Service for Kubernetes, you can't change unsafe kernel parameters unless their names have been explicitly specified when creating the node group.

  For more information about kernel parameters, see the Kubernetes documentation.
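As a sketch of how kernel parameters are requested at the pod level, the manifest below sets a safe sysctl via the pod's `securityContext`; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-example                       # hypothetical pod name
spec:
  securityContext:
    sysctls:
      - name: net.ipv4.ip_local_port_range   # a safe, pod-isolated kernel parameter
        value: "32768 60999"
  containers:
    - name: app
      image: registry.example.com/app:latest # hypothetical image
```

Unsafe sysctls requested this way are honored only if their names were explicitly allowed when the node group was created.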
You can create groups with different configurations in a Kubernetes cluster and place them in different availability zones.
Connecting to group nodes
You can connect to nodes in a group via SSH. Learn more in Connecting to a node via SSH.
Taints and tolerations policies
Taints are special policies assigned to nodes in a group. Taints let you prevent certain pods from running on certain nodes. For example, you can allow rendering pods to run only on nodes with GPUs.
Benefits of taints:
- The policies persist when a node is restarted or replaced with a new one.
- When adding nodes to a group, the policies are assigned to the node automatically.
- The policies are automatically assigned to new nodes when scaling a node group.
You can assign a taint to a node group only at creation.
Warning
Do not confuse Kubernetes node labels (`node_labels`) managed by Managed Service for Kubernetes with taints.
Each taint has three parts:
`<key>=<value>:<effect>`
List of available taint effects:
- `NO_SCHEDULE`: Prevent new pods from being scheduled on the group's nodes (pods that are already running are not stopped).
- `PREFER_NO_SCHEDULE`: Avoid scheduling pods on the group's nodes if other groups have free resources to run them.
- `NO_EXECUTE`: Stop pods running on the group's nodes, drain them to other node groups, and prevent new pods from starting.
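Inside the cluster, these effects appear on Node objects under the Kubernetes names `NoSchedule`, `PreferNoSchedule`, and `NoExecute`. For instance, a node carrying the taint `key1=value1:NoSchedule` contains the following fragment in its spec:

```yaml
# Fragment of a Node object: the NO_SCHEDULE effect corresponds to NoSchedule here.
spec:
  taints:
    - key: key1
      value: value1
      effect: NoSchedule
```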
Tolerations are exceptions to taint policies. Using tolerations, you can allow certain pods to run on nodes even if the node group's taint policy prevents this.
For example, if the taint policy for a node group is `key1=value1:NoSchedule`, you can run pods on its nodes using tolerations:
```yaml
apiVersion: v1
kind: Pod
...
spec:
  ...
  tolerations:
    - key: "key1"
      operator: "Equal"
      value: "value1"
      effect: "NoSchedule"
```
Note
System pods are automatically assigned tolerations so they can run on any available node.
For more information about taints and tolerations, see the Kubernetes documentation.
Pod
A pod is a request to run one or more containers on a group node. In a Kubernetes cluster, each pod has a unique IP address so that applications do not conflict when using ports.
Containers are described in pods via JSON or YAML objects.
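For illustration, a minimal YAML description of a pod with a single container might look like this; the pod name and image are placeholders:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                        # hypothetical pod name
spec:
  containers:
    - name: nginx
      image: nginx:1.25            # container image to run
      ports:
        - containerPort: 80        # port the application listens on
```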
IP masquerade for pods
If a pod needs access to resources outside the Kubernetes cluster, its IP address will be replaced by the IP address of the node the pod is running on. For this, the cluster uses IP masquerade.
By default, IP masquerade is enabled for the entire range of pod IP addresses.
To implement IP masquerade, the `ip-masq-agent` pod is deployed on each cluster node. The settings for this pod are stored in a ConfigMap object named `ip-masq-agent`. If you need to disable pod IP masquerade, for example, to access pods over a VPN or Yandex Cloud Interconnect, specify the relevant IP ranges in the `data.config.nonMasqueradeCIDRs` parameter:
```yaml
...
data:
  config: |+
    nonMasqueradeCIDRs:
      - <CIDR of pod IP addresses not to masquerade>
...
```
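As a fuller sketch, the fragment above could belong to a ConfigMap object like the one below. The `kube-system` namespace is an assumption; check which namespace the `ip-masq-agent` ConfigMap actually resides in for your cluster.

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system                      # assumed namespace
data:
  config: |+
    nonMasqueradeCIDRs:
      - <CIDR of pod IP addresses not to masquerade>
```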
Service
A service is an abstraction that provides network load balancing. Traffic rules are configured for a group of pods united by a common set of labels.
By default, a service is only available within a specific Kubernetes cluster, but it can be public and receive requests from outside the Kubernetes cluster.
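As a minimal sketch (names and ports are illustrative), a service that balances traffic across pods labeled `app: web` could look like this:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service                # hypothetical service name
spec:
  selector:
    app: web                       # pods with this label receive the traffic
  ports:
    - port: 80                     # port the service exposes inside the cluster
      targetPort: 8080             # port the application listens on in the pod
  type: ClusterIP                  # default: reachable only within the cluster
```

Changing `type` to `NodePort` or `LoadBalancer` is one way to make such a service reachable from outside the cluster.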
Namespace
A namespace is an abstraction that logically isolates Kubernetes cluster resources and distributes quotas to them. This is useful for isolating resources of different teams and projects in a single Kubernetes cluster.
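As a sketch (names and limits are illustrative), a namespace with a resource quota attached to it could be declared as follows:

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                     # hypothetical namespace name
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a                # the quota applies only to this namespace
spec:
  hard:
    requests.cpu: "4"              # total CPU the namespace's pods may request
    requests.memory: 8Gi           # total memory the namespace's pods may request
    pods: "20"                     # maximum number of pods in the namespace
```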
Service accounts
Managed Service for Kubernetes clusters use two types of service accounts:
- Cloud service accounts

  These accounts exist at the level of an individual folder in a cloud and can be used both by Managed Service for Kubernetes and by other services.

  For more information, see Access management in Managed Service for Kubernetes and Service accounts.

- Kubernetes service accounts

  These accounts exist and are valid only at the level of an individual Managed Service for Kubernetes cluster. Kubernetes uses them:

  - To authenticate cluster API calls from applications deployed in the cluster.
  - To configure access for these applications (see the manifest sketch after this list).

  A number of Kubernetes service accounts are automatically created in the `kube-system` namespace when a Managed Service for Kubernetes cluster is deployed. Kubernetes creates a token for each of these accounts; this token is used for authentication within the Kubernetes cluster the account belongs to.

  For more information, see the Kubernetes documentation.
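As a minimal sketch (names and image are hypothetical), an application-specific Kubernetes service account and a pod that calls the cluster API under that account could be declared as follows:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-reader                 # hypothetical service account name
  namespace: default
---
apiVersion: v1
kind: Pod
metadata:
  name: app
  namespace: default
spec:
  serviceAccountName: app-reader   # the pod's API calls use this account's token
  containers:
    - name: app
      image: registry.example.com/app:latest   # hypothetical image
```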
Warning
Do not confuse cloud service accounts with Kubernetes service accounts.
In the service documentation, service account refers to a regular cloud service account unless otherwise specified.
Node labels
Node labels (`node_labels`) are a mechanism for grouping nodes in Kubernetes. You can use node labels to manage the distribution of pods across the nodes of a cluster. For more information, see the Kubernetes documentation.
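For example, a pod can be restricted to nodes carrying a particular node label with a `nodeSelector`; the label key, value, and image below are illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-task                   # hypothetical pod name
spec:
  nodeSelector:
    group: gpu                     # schedule only on nodes labeled group=gpu
  containers:
    - name: worker
      image: registry.example.com/worker:latest   # hypothetical image
```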
Warning
Don't confuse node group cloud labels (`labels`) with Kubernetes node labels (`node_labels`) managed by Managed Service for Kubernetes.
We recommend managing all node labels via the Managed Service for Kubernetes API. By default, when a node group is updated or modified, some of its nodes are recreated with different names and some of the old ones are deleted, so labels added using the Kubernetes API may be lost. Conversely, deleting labels created via the Managed Service for Kubernetes API by means of the Kubernetes API has no effect, since such labels will be restored.
Node labels can only be set when creating a node group. Each object can be assigned a set of node labels in the form of `key: value` pairs. Each key must be unique to an object.
Node label keys can consist of two parts: an optional prefix and a name, separated by a `/`.
A prefix is an optional part of a key. Prefix requirements:
- It must be a DNS subdomain, that is, a series of DNS labels separated by dots (`.`).
- The maximum length is 253 characters.
- The last character must be followed by a `/`.
A name is a required part of a key. Naming requirements (examples follow this list):
- The length can be up to 63 characters.
- It may contain lowercase Latin letters, numbers, hyphens, underscores, and periods.
- The first and last characters must each be a letter or a number.
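As a sketch of keys that satisfy these rules, here is how a couple of node labels might look in a node group's `node_labels` setting; the keys and values are purely illustrative:

```yaml
node_labels:
  example.com/environment: production   # key with a DNS-subdomain prefix
  team: backend                         # key without a prefix
```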
For managing node labels, see Managing Kubernetes cluster node labels.