Setting Up a Kubernetes Cluster

Lab Environment

I created three VMs, k1 through k3, on a Proxmox VE (PVE) host on our company intranet. Each VM runs Ubuntu 22.04.3 with docker-ce installed; the containerd on these machines is the one shipped by Docker.

# Remember to turn swap off
sudo swapoff -a

# Add yourself (the ubuntu user) to the docker group so you can call docker without sudo.
# newgrp docker switches your active group; logging out and back in also works.
sudo usermod -aG docker $USER && newgrp docker
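One detail the snippet above glosses over: swapoff -a only lasts until the next reboot, and kubeadm's preflight checks also expect IP forwarding and the br_netfilter module to be enabled. A sketch of making both persistent, assuming a stock Ubuntu fstab (the file names under /etc/modules-load.d and /etc/sysctl.d are just conventions):

# Keep swap off across reboots by commenting out swap entries in /etc/fstab
sudo sed -ri '/\sswap\s/s/^/#/' /etc/fstab

# Kernel modules and sysctls the container runtime and kubeadm preflight expect
cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
overlay
br_netfilter
EOF
sudo modprobe overlay
sudo modprobe br_netfilter

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables  = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward                 = 1
EOF
sudo sysctl --system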

kubeadm

kubeadm is genuinely complex. Following the official tutorial I did get a cluster running, but because I didn't install a network plugin (CNI) when I brought it up, I spent two days fighting all sorts of problems and eventually gave up. Here are the steps:

# https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#containerd
# Requires Docker's apt repo to be added first: https://docs.docker.com/engine/install/ubuntu/
sudo apt install -y containerd.io

# After installing containerd.io, the CRI plugin has to be re-enabled by
# commenting out the following line in /etc/containerd/config.toml:
#   disabled_plugins = ["cri"]
# and then restarting the service:
sudo sed -i 's/^disabled_plugins/#disabled_plugins/' /etc/containerd/config.toml
sudo systemctl restart containerd

# Install the kube trio (kubeadm, kubelet, kubectl)
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.29/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.29/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt update && sudo apt install -y kubeadm kubelet kubectl
# Pin the package versions; use apt-mark unhold to unpin later
sudo apt-mark hold kubelet kubeadm kubectl


# Bring up the cluster
sudo kubeadm init --pod-network-cidr=10.244.0.0/18 --service-cidr=10.244.64.0/18

# Output like the following means everything is OK:

# Your Kubernetes control-plane has initialized successfully!
# To start using your cluster, you need to run the following as a regular user:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
# Alternatively, if you are the root user, you can run:
# export KUBECONFIG=/etc/kubernetes/admin.conf
# You should now deploy a pod network to the cluster.
# Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
# https://kubernetes.io/docs/concepts/cluster-administration/addons/
# Then you can join any number of worker nodes by running the following on each as root:
# kubeadm join 172.17.0.220:6443 --token 22tkpm.0eetw95la7mtv6qm \
# --discovery-token-ca-cert-hash sha256:****************************************************************

# To add worker nodes, run the kubeadm join command from the last lines of the output on each of them.
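Had I pushed on, the missing piece was deploying a CNI right after kubeadm init. A minimal sketch using Flannel; note that Flannel's manifest defaults to 10.244.0.0/16 as the pod network, so with the /18 used above you would have to edit the net-conf.json inside the manifest to match before applying:

# Deploy Flannel as the pod network (adjust its CIDR to match --pod-network-cidr first)
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Nodes flip from NotReady to Ready once the CNI pods are up
kubectl get nodes -w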

Good luck~

Finding Another Way

After failing to do it by hand, I went looking for other tools. kops and Rancher both target cloud environments, which doesn't fit my goal of local bare metal, so I turned to KubeKey, a tool that originally came out of the KubeSphere project.

kubekey

KubeKey has its own pitfalls, but it is much better than kubeadm. It can be used to deploy plain Kubernetes without installing KubeSphere; following the article How to Install Kubernetes the Easy Way Using KubeKey, I did get a cluster running.
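For reference, the basic flow from that article looks roughly like this (the installer URL and the subcommands come from KubeKey's README; take the exact flags as a sketch):

# Download the kk binary; KKZONE=cn switches to mirrors reachable from mainland China
curl -sfL https://get-kk.kubesphere.io | sh -

# Generate a sample cluster config (config-sample.yaml by default), then edit it -- mine is below
./kk create config --with-kubernetes v1.22.17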

A Small Pitfall

Kubernetes version compatibility. Here is my config; the version in it (v1.22.17) is the result of stepping down one release at a time from 1.29.0! The registry KubeKey pulls from is maintained by the KubeSphere team rather than registry.k8s.io, which is where the pitfalls come from. You could try switching the registry to Google's, but I ran out of energy; if you get it working, please leave me a comment.

apiVersion: kubekey.kubesphere.io/v1alpha2
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: node1, address: 172.17.0.220, internalAddress: 172.17.0.220, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node2, address: 172.17.0.221, internalAddress: 172.17.0.221, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  - {name: node3, address: 172.17.0.222, internalAddress: 172.17.0.222, user: ubuntu, privateKeyPath: "~/.ssh/id_rsa"}
  roleGroups:
    etcd:
    - node1
    control-plane:
    - node1
    worker:
    - node1
    - node2
    - node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.22.17
    clusterName: cluster.local
    autoRenewCerts: true
  etcd:
    type: kubekey
  network:
    plugin: calico
    kubePodsCIDR: 10.244.64.0/18
    kubeServiceCIDR: 10.244.0.0/18
    multusCNI:
      enabled: false
  registry:
    privateRegistry: ""
    namespaceOverride: ""
    registryMirrors: []
    insecureRegistries: []
  addons: []
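With the config saved (kk names it config-sample.yaml by default), bringing the cluster up is a single command; kk SSHes into every host listed above and does the rest:

./kk create cluster -f config-sample.yaml

# To tear everything down and retry with the same config:
# ./kk delete cluster -f config-sample.yaml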

Extras

I ran kk create cluster from my own workstation. You could just as well pick one of the nodes as the control machine and start the cluster from there, but kk is essentially a remote playbook in the Ansible spirit: as long as it has SSH access to the hosts, it works. Once the cluster is up, you need to pull its admin kubeconfig down to your machine and merge it with your local config.

# k1 is the control-plane node (node1 in the config above)
scp k1:~/.kube/config ./my-cluster
KUBECONFIG=~/.kube/config:./my-cluster kubectl config view --flatten > new-kubeconfig
mv new-kubeconfig ~/.kube/config

List all reachable clusters

$ kubectl config get-contexts
CURRENT   NAME                            CLUSTER             AUTHINFO            NAMESPACE
          kind-kind-cluster               kind-kind-cluster   kind-kind-cluster
*         [email protected]   cluster.local       kubernetes-admin

Switch context

$ kubectl config use-context [email protected]
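And a quick sanity check that the newly selected context actually reaches the cluster:

$ kubectl cluster-info
$ kubectl get nodes -o wide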