Configuring security groups
Security groups operate on a default-deny principle: any traffic that is not explicitly allowed is blocked. For a cluster to run, you need to create rules in its security groups to allow:
- Service traffic within the cluster.
- Connections to services from the internet.
- Connections to nodes over SSH.
- Kubernetes API access.
Note
We recommend creating an independent security group for each of the mentioned sets of rules.
You can set more fine-grained rules for security groups, such as allowing traffic only from specific subnets.
Security groups must be correctly configured for all subnets that will host the cluster. Their configuration determines the performance and availability of the cluster and the services running in it.
Before editing security groups or any of their rules, make sure the changes will not disrupt the cluster or its node groups.
Alert
Do not delete the security groups bound to a running cluster or node group as this might result in disruptions in their operation and data loss.
Creating rules for service traffic
Warning
Rules for service traffic are required for a regional cluster to work.
For the cluster to run properly, create rules for both incoming and outgoing traffic and apply them to the cluster and node groups:
- Add rules for incoming traffic:
  - For a network load balancer:
    - Port range: 0-65535.
    - Protocol: TCP.
    - Source type: Load balancer health checks.
  - To transfer service traffic between the master and nodes:
    - Port range: 0-65535.
    - Protocol: Any.
    - Source type: Security group.
    - Security group: Current (Self).
  - To transfer traffic between pods and services:
    - Port range: 0-65535.
    - Protocol: Any.
    - Source type: CIDR.
    - Destination: IP address ranges of the subnets created along with the cluster, for example 10.96.0.0/16 and 10.112.0.0/16.
  - To check the nodes using ICMP requests from subnets within Yandex Cloud:
    - Protocol: ICMP.
    - Source type: CIDR.
    - Destination: IP address ranges of the subnets within Yandex Cloud from which you will perform cluster diagnostics, for example 10.0.0.0/8, 192.168.0.0/16, and 172.16.0.0/12.
- Add a rule for outgoing traffic that allows cluster hosts to connect to external resources, for example, to download images from Docker Hub or work with Yandex Object Storage:
  - Port range: 0-65535.
  - Protocol: Any.
  - Source type: CIDR.
  - Destination: 0.0.0.0/0.
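For reference, a similar group can be sketched with the yc CLI instead of the console. The network name is a placeholder, and the `--rule` spec keys below are assumptions based on the CLI's `key=value` rule syntax; verify them with `yc vpc security-group create --help` before running:

```shell
# Sketch only: create a security group carrying the service-traffic rules above.
# <cloud network name> is a placeholder; rule-spec keys assume the yc CLI syntax.
yc vpc security-group create \
  --name k8s-main-sg \
  --network-name <cloud network name> \
  --rule "direction=ingress,from-port=0,to-port=65535,protocol=tcp,predefined=loadbalancer_healthchecks" \
  --rule "direction=ingress,from-port=0,to-port=65535,protocol=any,predefined=self_security_group" \
  --rule "direction=ingress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[10.96.0.0/16,10.112.0.0/16]" \
  --rule "direction=ingress,protocol=icmp,v4-cidrs=[10.0.0.0/8,192.168.0.0/16,172.16.0.0/12]" \
  --rule "direction=egress,from-port=0,to-port=65535,protocol=any,v4-cidrs=[0.0.0.0/0]"
```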
Creating a rule for connecting to services from the internet
To make the services running on nodes accessible from the internet and from subnets within Yandex Cloud, create a rule for incoming traffic and apply it to the node group:
- Port range: 30000-32767.
- Protocol: TCP.
- Source type: CIDR.
- Destination: 0.0.0.0/0.
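This port range matters because Kubernetes assigns NodePort services a port from 30000-32767 by default. A quick way to check the rule end to end, assuming a working kubeconfig and hypothetical deployment/service names:

```shell
# Expose a test deployment on a NodePort (names are hypothetical).
kubectl create deployment hello --image=nginx
kubectl expose deployment hello --type=NodePort --port=80
# Find the port Kubernetes assigned from the 30000-32767 range.
kubectl get service hello -o jsonpath='{.spec.ports[0].nodePort}'
# The service should now answer on the node's public address at that port.
curl http://<node public IP>:<nodePort>/
```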
Creating a rule for connecting to nodes via SSH
To access the nodes via SSH, create a rule for incoming traffic and apply it to the node group:
- Port: 22.
- Protocol: TCP.
- Source type: CIDR.
- Destination: IP address ranges of the subnets within Yandex Cloud and the public IP addresses of computers on the internet, for example 10.0.0.0/8, 192.168.0.0/16, 172.16.0.0/12, and 85.32.32.22/32.
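Once the rule is applied, you can verify connectivity from an allowed address. The username and key path below are assumptions (the default user depends on the node image):

```shell
# Connect to a node over SSH (username and key path are placeholders).
ssh -i ~/.ssh/id_ed25519 <username>@<node public IP>
# A plain port probe helps tell a security-group block apart from an SSH
# configuration problem: a refused/успешный TCP connect means the rule works.
nc -vz -w 5 <node public IP> 22
```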
Creating rules to access the Kubernetes API
To access the Kubernetes API and manage clusters using kubectl and other utilities, you need rules that allow connections to the master via ports 443 and 6443. Create two rules for incoming traffic, one rule per port, and apply them to the cluster:
- Ports: 443, 6443.
- Protocol: TCP.
- Source type: CIDR.
- Destination: IP address ranges of the subnets from which you will manage the cluster, for example:
  - 85.23.23.22/32: for the external network.
  - 192.168.0.0/24: for the internal network.
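To confirm the rules took effect, probe the master from an allowed network; the master address is a placeholder you fill in from the cluster overview:

```shell
# A TLS response, even a 401/403, proves the network path to port 443 is open.
curl -k https://<master public IP>/version
# With a configured kubeconfig, this exercises the same path end to end.
kubectl cluster-info
```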
Examples
For example, suppose you need to create rules for an existing Kubernetes cluster:
- With a zonal master located in the ru-central1-a availability zone.
- With the worker-nodes-c node group.
- With the address ranges 10.96.0.0/16 and 10.112.0.0/16 for pods and services.
- With access to services:
  - From the load balancer's address ranges 198.18.235.0/24 and 198.18.248.0/24.
  - From the internal subnets 172.16.0.0/12, 10.0.0.0/8, and 192.168.0.0/16 for the ICMP protocol.
  - From the internet, from any address (0.0.0.0/0), to the NodePort range (30000-32767).
- With access to nodes from the internet from the address 85.32.32.22/32 to port 22.
- With access to the Kubernetes API from the external address range 203.0.113.0/24 via ports 443 and 6443.
Four security groups are created:
- k8s-main-sg: rules for service traffic.
- k8s-public-services: rules for connecting to services from the internet.
- k8s-nodes-ssh-access: rules for connecting to nodes over SSH.
- k8s-master-whitelist: rules for accessing the cluster API.
```hcl
terraform {
  required_providers {
    yandex = {
      source = "yandex-cloud/yandex"
    }
  }
}

provider "yandex" {
  token     = "<service account OAuth or static key>"
  cloud_id  = "<cloud ID>"
  folder_id = "<folder ID>"
  zone      = "<availability zone>"
}

resource "yandex_vpc_security_group" "k8s-main-sg" {
  name        = "k8s-main-sg"
  description = "Group rules ensure the basic performance of the cluster. Apply it to the cluster and node groups."
  network_id  = "<cloud network ID>"

  ingress {
    protocol          = "TCP"
    description       = "Rule allows availability checks from the load balancer's address range. It is required for the operation of a fault-tolerant cluster and load balancer services."
    predefined_target = "loadbalancer_healthchecks"
    from_port         = 0
    to_port           = 65535
  }

  ingress {
    protocol          = "ANY"
    description       = "Rule allows master-node and node-node communication inside a security group."
    predefined_target = "self_security_group"
    from_port         = 0
    to_port           = 65535
  }

  ingress {
    protocol       = "ANY"
    description    = "Rule allows pod-pod and service-service communication. Specify the subnets of your cluster and services."
    v4_cidr_blocks = ["10.96.0.0/16", "10.112.0.0/16"]
    from_port      = 0
    to_port        = 65535
  }

  ingress {
    protocol       = "ICMP"
    description    = "Rule allows debugging ICMP packets from internal subnets."
    v4_cidr_blocks = ["172.16.0.0/12", "10.0.0.0/8", "192.168.0.0/16"]
  }

  egress {
    protocol       = "ANY"
    description    = "Rule allows all outgoing traffic. Nodes can connect to Yandex Container Registry, Object Storage, Docker Hub, and so on."
    v4_cidr_blocks = ["0.0.0.0/0"]
    from_port      = 0
    to_port        = 65535
  }
}

resource "yandex_vpc_security_group" "k8s-public-services" {
  name        = "k8s-public-services"
  description = "Group rules allow connections to services from the internet. Apply the rules only to node groups."
  network_id  = "<cloud network ID>"

  ingress {
    protocol       = "TCP"
    description    = "Rule allows incoming traffic from the internet to the NodePort range. Add or change ports as required."
    v4_cidr_blocks = ["0.0.0.0/0"]
    from_port      = 30000
    to_port        = 32767
  }
}

resource "yandex_vpc_security_group" "k8s-nodes-ssh-access" {
  name        = "k8s-nodes-ssh-access"
  description = "Group rules allow connections to cluster nodes over SSH. Apply the rules only to node groups."
  network_id  = "<cloud network ID>"

  ingress {
    protocol       = "TCP"
    description    = "Rule allows connections to nodes over SSH from specified IPs."
    v4_cidr_blocks = ["85.32.32.22/32"]
    port           = 22
  }
}

resource "yandex_vpc_security_group" "k8s-master-whitelist" {
  name        = "k8s-master-whitelist"
  description = "Group rules allow access to the Kubernetes API from the internet. Apply the rules to the cluster only."
  network_id  = "<cloud network ID>"

  ingress {
    protocol       = "TCP"
    description    = "Rule allows connections to the Kubernetes API via port 6443 from a specified network."
    v4_cidr_blocks = ["203.0.113.0/24"]
    port           = 6443
  }

  ingress {
    protocol       = "TCP"
    description    = "Rule allows connections to the Kubernetes API via port 443 from a specified network."
    v4_cidr_blocks = ["203.0.113.0/24"]
    port           = 443
  }
}

resource "yandex_kubernetes_cluster" "k8s-cluster" {
  name = "k8s-cluster"
  ...
  master {
    version = "1.20"
    zonal {
      zone      = "ru-central1-a"
      subnet_id = "<cloud subnet ID>"
    }
    security_group_ids = [
      yandex_vpc_security_group.k8s-main-sg.id,
      yandex_vpc_security_group.k8s-master-whitelist.id
    ]
    ...
  }
  ...
}

resource "yandex_kubernetes_node_group" "worker-nodes-c" {
  cluster_id = yandex_kubernetes_cluster.k8s-cluster.id
  name       = "worker-nodes-c"
  version    = "1.20"
  ...
  instance_template {
    platform_id = "standard-v3"
    network_interface {
      nat        = true
      subnet_ids = ["<cloud subnet ID>"]
      security_group_ids = [
        yandex_vpc_security_group.k8s-main-sg.id,
        yandex_vpc_security_group.k8s-nodes-ssh-access.id,
        yandex_vpc_security_group.k8s-public-services.id
      ]
      ...
    }
    ...
  }
}
```
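After filling in the placeholders and removing the `...` stubs, the manifest is applied with the standard Terraform workflow:

```shell
terraform init    # downloads the yandex-cloud/yandex provider
terraform plan    # previews the security groups, cluster, and node group
terraform apply   # creates the resources after confirmation
```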