How to Deploy a Kubernetes 1.28 Single-Control-Plane Cluster on OpenEuler 24.03 with kubeadm, Docker, and Calico

This guide focuses on building a Kubernetes 1.28 single-control-plane cluster on OpenEuler 24.03, addressing dependency compatibility, slow image pulls, and CNI deployment challenges in domestic Linux environments. The core workflow includes kubeadm initialization, cri-dockerd integration, and Calico networking. Keywords: OpenEuler, Kubernetes, single control plane.

The technical specification snapshot is straightforward

Operating System: OpenEuler 24.03
Kubernetes Version: v1.28.15
Container Runtime: Docker 28.0.3 + cri-dockerd 0.3.8
Network Plugin: Calico v3.26.0
Visualization Component: Kubernetes Dashboard v2.7.0
Monitoring Component: Metrics Server v0.7.x
Cluster Topology: 1 control plane + 2 workers
Protocols and Interfaces: HTTPS, CNI, CRI, CSI
Core Dependencies: kubeadm, kubelet, kubectl, docker-ce, cri-dockerd

This single-control-plane design fits labs and small validation environments

The original workflow is clear: prepare system initialization on three OpenEuler 24.03 hosts, install Docker and cri-dockerd, and then use kubeadm to bootstrap the control plane and join worker nodes.

The node plan is as follows: the control plane node is 192.168.100.110, and the two worker nodes are 192.168.100.111 and 192.168.100.112. This layout works well for training, testing, proofs of concept, and offline validation. It is not suitable for production-grade high availability.

The cluster roles are easy to visualize

# Control plane node
192.168.100.110  k8s-master

# Worker nodes
192.168.100.111  k8s-node1
192.168.100.112  k8s-node2

This configuration defines the node boundaries of a minimal but functional Kubernetes cluster.

Pre-deployment system initialization must be done correctly the first time

Kubernetes enforces a strict Linux baseline. If firewall rules, SELinux, swap, hostname resolution, or kernel parameters are inconsistent across nodes, kubeadm init often fails immediately.

On every node, clear iptables rules, disable firewalld, turn off SELinux and swap, and complete /etc/hosts resolution. On OpenEuler in particular, verify that bridged traffic can pass through iptables.

The initialization steps can be combined into one script

# Clear rules and disable the firewall
iptables -t nat -F                    # Clear the NAT table
iptables -t filter -F                 # Clear the filter table
systemctl disable --now firewalld     # Disable the firewall

# Disable SELinux
setenforce 0                          # Switch SELinux to permissive for the current session
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config   # Disable permanently (takes effect on reboot)

# Disable swap
swapoff -a                            # Turn off swap immediately
sed -i 's/^[^#].*swap.*/#&/' /etc/fstab   # Comment out swap entries, skipping lines already commented

This script standardizes node system state and prevents kubelet preflight checks from failing.

Kernel parameters and hostname resolution must be synchronized across all nodes

cat >/etc/hosts <<'EOF'
127.0.0.1 localhost
192.168.100.110 k8s-master
192.168.100.111 k8s-node1
192.168.100.112 k8s-node2
EOF

cat >>/etc/sysctl.conf <<'EOF'
net.ipv4.ip_forward=1
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
EOF

modprobe br_netfilter                  # Load the bridge filtering module
sysctl -p                              # Apply changes immediately

This step ensures mutual hostname resolution and provides the kernel support required for CNI networking and Service forwarding.
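
One gap worth closing beyond the steps above: modprobe does not survive a reboot. A minimal way to make the module load at boot and to confirm both settings took effect:

echo br_netfilter > /etc/modules-load.d/k8s.conf   # Load the module on every boot
lsmod | grep br_netfilter                          # Confirm the module is loaded
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward   # Both should print 1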

The runtime choice uses Docker plus cri-dockerd for Kubernetes compatibility

Because Kubernetes removed the in-tree dockershim in v1.24, the source article uses Docker + cri-dockerd, which remains a common compatibility path. Its main advantage is a lower learning curve for teams already fluent with Docker; its downside is a longer component chain than containerd (kubelet → cri-dockerd → dockerd → containerd).

On OpenEuler, first configure a Docker registry mirror, then install docker-ce, and finally deploy cri-dockerd with an explicit pause image setting. This reduces the risk of image pull failures in domestic network environments.
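
The source mentions the mirror configuration without reproducing it. A minimal sketch of /etc/docker/daemon.json, with the mirror URL as a placeholder to replace with a mirror reachable from your network:

cat >/etc/docker/daemon.json <<'EOF'
{
  "registry-mirrors": ["https://<your-mirror-address>"]
}
EOF
systemctl restart docker               # Pick up the new daemon configuration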

Docker and cri-dockerd require only a few critical commands

yum install -y docker-ce-28.0.3        # Install Docker
systemctl enable --now docker          # Start Docker

rpm -ivh cri-dockerd-0.3.8-3.el8.x86_64.rpm   # Install cri-dockerd
systemctl daemon-reload
systemctl enable --now cri-docker      # Start the CRI adapter layer

These commands bridge the container runtime with the Kubernetes CRI interface.
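
The explicit pause image setting mentioned earlier is usually applied by editing the ExecStart line of the cri-docker service unit. A sketch, assuming the RPM installs the unit at /usr/lib/systemd/system/cri-docker.service with the default fd:// endpoint, and that pause:3.9 is the tag matching Kubernetes 1.28:

sed -i 's|^ExecStart=.*|ExecStart=/usr/bin/cri-dockerd --container-runtime-endpoint fd:// --pod-infra-container-image=registry.aliyuncs.com/google_containers/pause:3.9|' \
  /usr/lib/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl restart cri-docker           # Apply the new pause image setting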

kubeadm control plane initialization depends on explicit network and runtime settings

The control plane initialization command is the core of this deployment. The source specifies the API server address, Alibaba Cloud image registry, Kubernetes version, Pod CIDR, Service CIDR, and CRI socket. Together, these parameters determine whether the cluster comes up successfully on the first attempt.

Use this command to initialize the control plane node

kubeadm init \
  --apiserver-advertise-address=192.168.100.110 \
  --image-repository=registry.aliyuncs.com/google_containers \
  --kubernetes-version=v1.28.15 \
  --pod-network-cidr=10.244.0.0/16 \
  --service-cidr=10.96.0.0/12 \
  --cri-socket=unix:///var/run/cri-dockerd.sock

This command generates control plane static Pods, certificates, kubeconfig files, and the node join token.

After initialization succeeds, save the kubeadm join output and configure the admin context on the control plane node:

mkdir -p $HOME/.kube
cp /etc/kubernetes/admin.conf $HOME/.kube/config   # Configure the kubectl context
chown $(id -u):$(id -g) $HOME/.kube/config         # Required when running kubectl as a non-root user
kubectl get nodes                                  # Verify node status

This step gives the current host cluster administration access.

Worker nodes often show NotReady until the network plugin becomes available

The worker join command must include --cri-socket=unix:///var/run/cri-dockerd.sock. Otherwise, kubeadm may detect multiple CRI endpoints (a Docker install also ships containerd, which exposes its own socket) and refuse to join, or pick the wrong runtime.

If a node shows NotReady after joining, kubeadm is usually not the problem. In most cases, Calico has not finished deploying yet. Do not reset the cluster too early. Check the CNI manifests and image availability first.
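
A quick diagnostic pass along those lines (the node and Pod names below are examples):

kubectl get nodes                                   # Identify the NotReady node
kubectl get pods -A -o wide | grep -i calico        # Is a calico-node Pod running on that node?
kubectl describe node k8s-node1                     # Conditions often report "network plugin not ready"
kubectl describe pod <calico-pod> -n <its-namespace>   # Events section reveals image pull failures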

Use this worker join command template

kubeadm join 192.168.100.110:6443 \
  --token <your-token> \
  --discovery-token-ca-cert-hash sha256:<your-hash> \
  --cri-socket=unix:///var/run/cri-dockerd.sock

This command registers the worker node with the control plane and adds it to the shared scheduler domain.
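
If the token from kubeadm init has expired (the default lifetime is 24 hours), generate a fresh join command on the control plane node; note that the printed command does not include the CRI socket flag, so append it yourself:

kubeadm token create --print-join-command   # Prints a new kubeadm join command
# Append --cri-socket=unix:///var/run/cri-dockerd.sock before running it on the worker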

Calico is what makes Pod networking actually work

The source uses Calico v3.26.0. The key detail is not the install command itself, but the cidr value in custom-resources.yaml. It must exactly match kubeadm init --pod-network-cidr, or cross-node communication will fail.

In domestic environments, image pulls often become the bottleneck. A more reliable approach is to apply the manifests first, list the required images with kubectl describe pod, and then mirror them into an internal registry.
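
For reference, the operator-based install for this Calico version looks like the sketch below; the manifest URLs are the upstream defaults (mirror them if GitHub is unreachable), and the cidr line is the one that must match --pod-network-cidr:

kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/tigera-operator.yaml
wget https://raw.githubusercontent.com/projectcalico/calico/v3.26.0/manifests/custom-resources.yaml

# Edit custom-resources.yaml before applying; the upstream default cidr is 192.168.0.0/16:
#   spec:
#     calicoNetwork:
#       ipPools:
#       - cidr: 10.244.0.0/16        # must equal --pod-network-cidr
kubectl create -f custom-resources.yaml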

Dashboard and Metrics Server complete the visibility and observability layer

Dashboard is useful for training and demos, but it is not meant to be exposed publicly by default. The source changes the Service type to NodePort with node port 30001 for quick access, then creates an admin-user ServiceAccount and binds it to cluster-admin to generate a login token.
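
One way to make that NodePort change, assuming the stock Service name and namespace from the v2.7.0 manifest:

kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
  -p '{"spec": {"type": "NodePort", "ports": [{"port": 443, "nodePort": 30001}]}}'
# Dashboard is then reachable at https://<any-node-ip>:30001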

(Screenshot: the Kubernetes Dashboard HTTPS login page. A working login page confirms that the API server, the Dashboard Pod, and the frontend proxy path are all functioning, and that ServiceAccount token authentication is available.)

After login, you can inspect namespaces, workloads, Pods, and Services, which makes Dashboard a practical operational view for cluster management.

(Screenshot: the post-login Dashboard overview page, showing node, namespace, and resource object statistics. This view confirms that RBAC authorization, frontend-to-API-server access, and object read permissions are all in effect.)

Metrics Server provides the backend data source for kubectl top nodes and kubectl top pods. The source specifically highlights three compatibility adjustments: replacing images with domestic mirrors, changing probes to TCP, and adding --kubelet-insecure-tls. These are common fixes in self-signed certificate environments.
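
The TLS adjustment is the one most likely to block kubectl top. A sketch of adding the flag to the Deployment, assuming the stock manifest name and namespace:

kubectl -n kube-system patch deployment metrics-server --type=json \
  -p='[{"op": "add", "path": "/spec/template/spec/containers/0/args/-", "value": "--kubelet-insecure-tls"}]'
# kubectl top nodes should start returning data within a minute or two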

Create an admin account for Dashboard access

kubectl create serviceaccount admin-user -n kubernetes-dashboard
kubectl create clusterrolebinding admin-user \
  --clusterrole=cluster-admin \
  --serviceaccount=kubernetes-dashboard:admin-user   # Bind admin privileges
kubectl create token admin-user -n kubernetes-dashboard

These commands generate an administrator token for Dashboard login.

Understanding component responsibilities matters more than memorizing commands

The second half of the source breaks down the responsibilities of pause, kube-apiserver, controller-manager, scheduler, kube-proxy, etcd, CoreDNS, and Calico. This section is highly valuable because it tells you where to look when troubleshooting.

The shortest summary is this: kube-apiserver is the entry point, etcd is the state store, scheduler places Pods, controller-manager drives the cluster toward the desired state, kube-proxy handles Service forwarding, CoreDNS provides service discovery, and Calico provides connectivity and network policy.
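
A practical corollary on the control plane node: the first four of those components run as static Pods whose manifests live on disk, and kubelet itself logs to the system journal, which is usually the first place to look when a node misbehaves:

ls /etc/kubernetes/manifests           # Static Pod manifests: etcd, kube-apiserver, controller-manager, scheduler
journalctl -u kubelet -f               # Follow kubelet logs on any node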

Run these commands for a quick health check

kubectl get nodes -o wide              # View node status
kubectl get pods -A                    # View Pods in all namespaces
kubectl top nodes                      # Requires Metrics Server
kubectl -n kube-system get pods        # Check system components

These commands help you quickly verify the control plane, network layer, and monitoring path.

Single-control-plane resource planning must match cluster size

For a 1 control plane + 2 workers lab cluster, one control plane node is enough, but you must accept the single point of failure risk. The source guidance can be summarized as follows: for small clusters, 2 to 4 CPU cores and 4 to 8 GB of memory are usually sufficient. For medium-sized or larger environments, use a three-control-plane high-availability design and place etcd on SSD storage.

If your workloads make frequent API calls or your Pod count grows quickly, upgrade control plane CPU and disk IOPS before focusing only on worker capacity.

FAQ

FAQ 1: Why does a node remain NotReady after joining?

In most cases, the CNI plugin such as Calico is not installed, the Pod CIDR does not match, or required images failed to pull. Start by checking kubectl get pods -A and the status of Calico components.

FAQ 2: Why must swap be disabled?

kubelet requires swap to be disabled by default so that scheduling and resource management remain predictable. If swap is still enabled, kubeadm init or kubelet preflight checks often fail immediately.

FAQ 3: Can a single-control-plane cluster run in production?

It is not recommended. This design is suitable for learning, testing, and functional validation. Production environments should use a multi-control-plane high-availability architecture, with separate planning for etcd, monitoring, backups, and ingress traffic.

Core takeaway

This guide reconstructs the full process of deploying a Kubernetes 1.28 single-control-plane cluster on OpenEuler 24.03 with Docker, cri-dockerd, and kubeadm. It covers system initialization, control plane setup, Calico networking, Dashboard, Metrics Server, and the responsibilities of core Kubernetes components.