
First Encounter with Kubernetes

  • This post only organizes a few components and a small number of concepts; it does not go any deeper.
  • Read selectively according to your own needs.


  • Passed the CKA exam!

A Simple Cluster

Kubernetes_Dashboard

Dependencies

docker


# sudo apt install docker.io
## Docker configuration
## Note: the inline # comments below are annotations only; strip them if you copy this file, since JSON has no comment syntax.
# vim /etc/docker/daemon.json
{
    "registry-mirrors": [                            # Docker Hub mirrors
        "http://f1361db2.m.daocloud.io",
        "http://hub-mirror.c.163.com",
        "https://registry.docker-cn.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],    # set the cgroup driver to systemd
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m"
    },
    "storage-driver": "overlay2"
}
# systemctl daemon-reload
# systemctl restart docker

kubelet, kubeadm, kubectl


## Google apt source
# sudo apt-get update && sudo apt-get install -y apt-transport-https
# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
## Aliyun apt source
# sudo apt-get update && sudo apt-get install -y apt-transport-https
# curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# echo "deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
# sudo apt-get update
## Install kubeadm, kubelet and kubectl
# sudo apt-get install -y kubeadm kubelet kubectl
# sudo apt-mark hold kubelet kubeadm kubectl   # pin the versions
# sudo systemctl enable kubelet && sudo systemctl start kubelet

There is also Tsinghua's TUNA mirror.
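
If you prefer the TUNA mirror, the source line is analogous (a sketch; it assumes TUNA still publishes the Kubernetes apt tree at this path and that you reuse the Aliyun-hosted GPG key):


## Tsinghua TUNA apt source
# curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
# echo "deb https://mirrors.tuna.tsinghua.edu.cn/kubernetes/apt kubernetes-xenial main" | sudo tee -a /etc/apt/sources.list.d/kubernetes.list
# sudo apt-get update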

Installing with kubeadm

Default initialization configuration:


# kubeadm config print init-defaults
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 1.2.3.4
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: yohane01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: gcr.io/google-containers
kind: ClusterConfiguration
kubernetesVersion: v1.18.0
networking:
  dnsDomain: cluster.local
  serviceSubnet: 10.211.55.0/24
scheduler: {}

Customize the configuration:


# cat <<EOF > init-config.yaml
> apiVersion: kubeadm.k8s.io/v1beta2
> kind: ClusterConfiguration
> imageRepository: registry.aliyuncs.com/google_containers # Aliyun mirror of the gcr images
> kubernetesVersion: v1.18.0
> networking:
>   podSubnet: "10.211.0.0/16" # Pod subnet CIDR
> EOF
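
As an optional sanity check, kubeadm can list the exact images this configuration resolves to, which confirms the Aliyun repository actually took effect before anything is pulled:


# kubeadm config images list --config=init-config.yaml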

Pull the images and initialize the cluster:


# kubeadm config images pull --config=init-config.yaml
# kubeadm init --config=init-config.yaml
[init] Using Kubernetes version: v1.18.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [yohane01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.211.55.5]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [yohane01 localhost] and IPs [10.211.55.5 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [yohane01 localhost] and IPs [10.211.55.5 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0407 07:01:37.377879   15975 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0407 07:01:37.382433   15975 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 19.505620 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node yohane01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node yohane01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 78531x.dj83yq156v4u8ygy
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.211.55.5:6443 --token 78531x.dj83yq156v4u8ygy \
    --discovery-token-ca-cert-hash sha256:a9319a8cb2d7fdc500e0dfeb4d2db83a9778f16583f65042bf5b9a339ddd49f7
    
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
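
If you are operating as root, an alternative to copying admin.conf is simply to point KUBECONFIG at it for the current shell:


# export KUBECONFIG=/etc/kubernetes/admin.conf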

Join a worker node to the cluster:


# cat <<EOF > join-config.yaml
> apiVersion: kubeadm.k8s.io/v1beta2
> kind: JoinConfiguration
> discovery:
>   bootstrapToken:
>     apiServerEndpoint: 10.211.55.5:6443
>     token: 78531x.dj83yq156v4u8ygy
>     unsafeSkipCAVerification: true
>   tlsBootstrapToken: 78531x.dj83yq156v4u8ygy
> EOF

# kubeadm join --config=join-config.yaml
W0407 07:05:36.919459    4484 join.go:346] [preflight] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
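
The bootstrap token used above expires after 24 hours by default. If you need to join another node later, a fresh token together with a ready-to-run join command can be generated on the control plane:


# kubeadm token create --print-join-command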

Check the cluster status:


# kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
yohane01   Ready    master   1m    v1.18.0
yohane02   Ready    <none>   1m    v1.18.0

# kubectl get pods --all-namespaces
NAMESPACE              NAME                                         READY   STATUS    RESTARTS   AGE
kube-system            coredns-7ff77c879f-d6shj                     1/1     Running   1          1m
kube-system            coredns-7ff77c879f-zz2mq                     1/1     Running   1          1m
kube-system            etcd-yohane01                                1/1     Running   1          1m
kube-system            kube-apiserver-yohane01                      1/1     Running   1          1m
kube-system            kube-controller-manager-yohane01             1/1     Running   16         1m
kube-system            kube-proxy-64j96                             1/1     Running   1          1m
kube-system            kube-proxy-z7qfn                             1/1     Running   1          1m
kube-system            kube-scheduler-yohane01                      1/1     Running   18         1m
kube-system            weave-net-9jslw                              2/2     Running   4          1m
kube-system            weave-net-x8l9c                              2/2     Running   3          1m
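
Note that the pod listing already contains weave-net pods: nodes only become Ready after a Pod network add-on has been applied (the step the kubeadm init output points to). Weave Net was installed here with the command its documentation recommended at the time; substitute whichever CNI you prefer:


# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"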

Related Files and Paths


## Component configuration
/etc/kubernetes/
├── admin.conf
├── controller-manager.conf
├── scheduler.conf
├── kubelet.conf
├── manifests ## static Pod manifests for the containerized components
│   ├── etcd.yaml
│   ├── kube-apiserver.yaml
│   ├── kube-controller-manager.yaml
│   └── kube-scheduler.yaml
└── pki ## component certificates
    ├── apiserver.crt
    ├── apiserver-etcd-client.crt
    ├── apiserver-etcd-client.key
    ├── apiserver.key
    ├── apiserver-kubelet-client.crt
    ├── apiserver-kubelet-client.key
    ├── ca.crt
    ├── ca.key
    ├── etcd
    │   ├── ca.crt
    │   ├── ca.key
    │   ├── healthcheck-client.crt
    │   ├── healthcheck-client.key
    │   ├── peer.crt
    │   ├── peer.key
    │   ├── server.crt
    │   └── server.key
    ├── front-proxy-ca.crt
    ├── front-proxy-ca.key
    ├── front-proxy-client.crt
    ├── front-proxy-client.key
    ├── sa.key
    └── sa.pub
    
## Kubernetes runtime files
/var/lib/kubelet/
├── config.yaml
├── cpu_manager_state
├── device-plugins
├── kubeadm-flags.env
├── pki
├── plugins
├── plugins_registry
├── pod-resources
└── pods

## kubelet startup configuration (systemd drop-in)
# cat  /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
# Note: This dropin only works with kubeadm and kubelet v1.11+
[Service]
Environment="KUBELET_KUBECONFIG_ARGS=--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf"
Environment="KUBELET_CONFIG_ARGS=--config=/var/lib/kubelet/config.yaml"
# This is a file that "kubeadm init" and "kubeadm join" generates at runtime, populating the KUBELET_KUBEADM_ARGS variable dynamically
EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
# This is a file that the user can use for overrides of the kubelet args as a last resort. Preferably, the user should use
# the .NodeRegistration.KubeletExtraArgs object in the configuration files instead. KUBELET_EXTRA_ARGS should be sourced from this file.
EnvironmentFile=-/etc/default/kubelet
ExecStart=
ExecStart=/usr/bin/kubelet $KUBELET_KUBECONFIG_ARGS $KUBELET_CONFIG_ARGS $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS

## CRI-O related paths
/etc/crio/

## CNI related paths
/etc/cni/net.d/   # CNI configuration files
/opt/cni/bin/     # CNI plugin binaries

Components

There should be an architecture diagram here (TODO).

Master Components / Control Plane Components

kubectl

The command-line tool used to talk to the cluster.
Common commands:


kubectl create -f xxx-rc/svc.yaml    # create an RC/Service from a YAML manifest

kubectl get rc            # list ReplicationControllers
kubectl get rs            # list ReplicaSets
kubectl get svc           # list Services
kubectl get deployments   # list Deployments
kubectl get pods          # list Pods
kubectl get nodes         # list Nodes
kubectl get namespaces    # list Namespaces

kubectl describe node <node_name>    # show details of a node

kubectl scale rc <rc_name> --replicas=<count>    # scale a resource up or down

kube-apiserver

Runs on the Master node and exposes the cluster's RESTful API, making Kubernetes programmable; it is the front-end control component of Kubernetes.

Related components:

  • Interacts with kubectl.
  • Uses etcd to store the data that Kubernetes components exchange.
  • Works with kube-controller-manager, kube-scheduler and kubelet to coordinate the creation, modification and deletion of resource objects.
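
Since everything goes through the API server's REST interface, a quick way to poke at it directly is kubectl's built-in proxy, which takes care of authentication (a minimal sketch):


## Open an authenticated local proxy to the kube-apiserver
# kubectl proxy --port=8001 &
## Query the REST API directly
# curl http://127.0.0.1:8001/api/v1/namespaces/kube-system/pods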

etcd

The backing key-value store that holds all cluster data for Kubernetes.

Related components:

  • kube-controller-manager, kube-scheduler and kubelet watch (through the kube-apiserver) for changes to the resource objects stored in etcd.
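
To see what actually lives in etcd, you can run etcdctl inside the etcd static Pod that kubeadm created (a sketch: the Pod name etcd-yohane01 comes from the pod listing above, the certificate paths are the kubeadm defaults listed in the pki tree, and it assumes the etcd image ships sh and etcdctl, as the default image does):


# kubectl -n kube-system exec etcd-yohane01 -- sh -c \
    "ETCDCTL_API=3 etcdctl \
     --cacert=/etc/kubernetes/pki/etcd/ca.crt \
     --cert=/etc/kubernetes/pki/etcd/server.crt \
     --key=/etc/kubernetes/pki/etcd/server.key \
     get /registry --prefix --keys-only | head"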

kube-scheduler

Watches for unscheduled Pods and, using its scheduling policies and algorithms, binds each Pod to the most suitable node.

Related components:

  • Depends on the kube-apiserver service.
  • Decides, according to the scheduling policies and algorithms, where Pods run; the kubelet on the chosen node then manages the Pod's lifecycle (see the example below).
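
For example, you can steer the scheduler's choice with a nodeSelector (a minimal sketch; the disktype=ssd label is hypothetical and would first be added with `kubectl label node <node_name> disktype=ssd`):


apiVersion: v1
kind: Pod
metadata:
  name: nginx-on-ssd          # hypothetical example Pod
spec:
  nodeSelector:
    disktype: ssd             # the scheduler only considers nodes carrying this label
  containers:
    - name: nginx
      image: nginx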

kube-controller-manager

The core controller component; it bundles, among others:

  • Replication Controller: maintains the correct number of Pods for every replication-controller object in the system.
  • Node Controller: notices and responds when nodes go down.
  • ResourceQuota Controller: enforces resource quotas so that no resource object ever consumes more physical resources than it is allowed to.
  • Namespace Controller: manages the resource objects inside each namespace.
  • ServiceAccount Controller & Token Controller: create the default service account and API access token for new namespaces.
  • Service Controller: manages all Service resource objects.
  • Endpoint Controller: populates Endpoints objects (i.e. joins Services and Pods).

Related components:

  • Depends on the kube-apiserver service.
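
The replica-maintaining behaviour is easy to watch with a Deployment: the controller manager recreates any Pod you delete (a sketch; the nginx names are just examples):


# kubectl create deployment nginx --image=nginx
# kubectl scale deployment nginx --replicas=3
# kubectl delete $(kubectl get pods -l app=nginx -o name | head -n 1)
# kubectl get pods -l app=nginx     # a replacement Pod appears almost immediately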

Node Components

kubelet

The agent that runs on each cluster node and carries out the tasks the Master hands down to that node.

Related components:

  • Registers the node with the kube-apiserver service and keeps reporting the node's status.
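
The node information and conditions the kubelet reports can be inspected directly (yohane01 is the control-plane node from the output above):


# kubectl describe node yohane01      # capacity, conditions, allocated resources, ...
# kubectl get node yohane01 -o jsonpath='{.status.conditions[*].type}'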

kube-proxy

The network proxy that runs on each cluster node; it provides load balancing, communication, failover and traffic forwarding for Services.
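
In a kubeadm cluster the kube-proxy configuration lives in a ConfigMap, so its proxy mode (empty means the iptables default) and the rules it programs can be checked roughly like this (a sketch):


# kubectl -n kube-system get configmap kube-proxy -o yaml | grep -E '^\s*mode:'
# iptables -t nat -L KUBE-SERVICES | head     # chains programmed by kube-proxy in iptables mode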

Container Runtime

Kubernetes supports several container runtimes: Docker, containerd, CRI-O, rktlet, and anything else that implements the Kubernetes CRI (Container Runtime Interface); Docker is the usual choice.

Other Concepts

pod

A Pod is the basic execution unit of a Kubernetes application, i.e. the smallest and simplest unit that can be created or deployed in the Kubernetes object model; the shared, isolated environment of its containers is built on the pause container.
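
Because every container in a Pod hangs off the same pause container, they share one network namespace. A minimal sketch that demonstrates this (the image names are only examples):


apiVersion: v1
kind: Pod
metadata:
  name: shared-netns-demo            # hypothetical example Pod
spec:
  containers:
    - name: web
      image: nginx                   # listens on port 80
    - name: sidecar
      image: busybox
      command: ["sh", "-c", "sleep 3600"]

Once it is running, `kubectl exec shared-netns-demo -c sidecar -- wget -qO- http://127.0.0.1` reaches the nginx container, because both containers share the Pod's (pause container's) network namespace.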

service

An abstraction over a group of Pods that defines a single entry point for a service; the Pods are bound through a Label Selector, which gives the service high availability.
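
A minimal Service that selects its Pods by label might look like this (app: web is an example label and must match the labels on the backing Pods):


apiVersion: v1
kind: Service
metadata:
  name: web-svc              # hypothetical example Service
spec:
  selector:
    app: web                 # binds every Pod carrying this label
  ports:
    - port: 80               # port the Service listens on (Cluster IP)
      targetPort: 8080       # port the backing Pods listen on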

IPs and Ports in K8s

IP

  • Pod IP: the IP of a Pod inside the cluster. Pods on different Nodes can talk to each other via their Pod IPs. The address is allocated by Docker in bridge mode, and the network add-on guarantees it does not collide with other Pod IPs in the cluster.
  • Node IP: the IP address of a Node, i.e. the real IP of the node's physical NIC; services inside the cluster can be reached from outside the cluster through it.
  • Cluster IP: the virtual IP of a Service (Service IP / VIP); kube-proxy load-balances the traffic sent to it.

Port

  • containerPort: the port the container listens on
  • hostPort: the port the container's host machine listens on

apiVersion: v1
kind: Pod
...
spec:
  containers:
    - ...
      ports:
        - containerPort: 8080 # port the container EXPOSEs
          hostPort: 8080      # port the physical host listens on
...
  • port: the port the Service listens on
  • targetPort: the port traffic is forwarded to on the backend Pods
  • nodePort: the port mapped onto the node's physical host

apiVersion: v1
kind: Service
...
spec:
  type: NodePort
  ports:
  - port: 10080          # port the Service listens on
    targetPort: 80       # port forwarded to on the backend Pods
    nodePort: 10000      # port mapped onto the node's physical host
...
IP \ Port    | containerPort                | hostPort                 | targetPort | port              | nodePort
Pod IP       | in-cluster access (Endpoint) | -                        | -          | -                 | -
Cluster IP   | -                            | -                        | -          | in-cluster access | -
Node IP      | -                            | accessible from outside  | -          | -                 | accessible from outside
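
Putting the table together with the NodePort Service above, the same backend can be reached along several paths (illustrative; substitute real addresses):


## From inside the cluster
# curl http://<cluster-ip>:10080
## From outside the cluster
# curl http://<node-ip>:10000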

Q&A

ErrImagePull / ImagePullBackOff


# kubectl get pod --all-namespaces
NAMESPACE              NAME                                         READY   STATUS             RESTARTS   AGE
...
kube-system            metrics-server-5f956b6d5f-xqb5k              0/1     ImagePullBackOff   0          56m
...

# kubectl describe  pod metrics-server-5f956b6d5f-xqb5k  -n kube-system
Name:         metrics-server-5f956b6d5f-xqb5k
Namespace:    kube-system
Status:       Pending
...
Containers:
  metrics-server:
    Container ID:
    Image:         k8s.gcr.io/metrics-server-amd64:v0.3.6
    ...
    State:          Waiting
      Reason:       ImagePullBackOff
...
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  <unknown>             default-scheduler  Successfully assigned kube-system/metrics-server-5f956b6d5f-xqb5k to yohane02
  Warning  Failed     55m (x4 over 57m)     kubelet, yohane02  Failed to pull image "k8s.gcr.io/metrics-server-amd64:v0.3.6": rpc error: code = Unknown desc = Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
  Warning  Failed     55m (x4 over 57m)     kubelet, yohane02  Error: ErrImagePull
  Normal   BackOff    12m (x181 over 57m)   kubelet, yohane02  Back-off pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
  Warning  Failed     7m8s (x203 over 57m)  kubelet, yohane02  Error: ImagePullBackOff
  Normal   Pulling    2m22s (x15 over 57m)  kubelet, yohane02  Pulling image "k8s.gcr.io/metrics-server-amd64:v0.3.6"
  

The image pull failed because k8s.gcr.io is unreachable from here (blocked =- =).
The fix, of course, is to set up a proxy.

Set up a proxy


## Set a system-wide proxy
export http_proxy=http://127.0.0.1:1087;
export https_proxy=http://127.0.0.1:1087;

## Or set a proxy for the Docker daemon
# mkdir -p /etc/systemd/system/docker.service.d
# cat <<EOF > /etc/systemd/system/docker.service.d/http-proxy.conf
> [Service]
> Environment="HTTP_PROXY=http://192.168.0.106:1087"
> Environment="HTTPS_PROXY=http://192.168.0.106:1087"
> Environment="NO_PROXY=localhost,127.0.0.1"
> EOF
# systemctl daemon-reload
# systemctl restart docker

Or switch to a gcr image mirror

Mirrors inside China:

  • registry.aliyuncs.com/google_containers # Aliyun
  • registry.cn-hangzhou.aliyuncs.com/google_containers # Aliyun (Hangzhou)
  • gcr.azk8s.cn/google_containers # Azure China mirror; not recommended, it goes down frequently

Change the relevant parameters in the YAML files


imageRepository: registry.aliyuncs.com/google_containers
# or
image: registry.aliyuncs.com/google_containers/<imageName>:<version>
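
For a single stuck image such as the metrics-server one above, another workaround is to pull it from a mirror and re-tag it locally so that the original k8s.gcr.io reference resolves (a sketch, assuming the mirror actually hosts that image and tag):


# docker pull registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6
# docker tag registry.aliyuncs.com/google_containers/metrics-server-amd64:v0.3.6 k8s.gcr.io/metrics-server-amd64:v0.3.6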

TIP

Some registry mirrors for Docker Hub

Docker registry mirror settings


# vim /etc/docker/daemon.json
{
...
    "registry-mirrors": [ 
        "http://f1361db2.m.daocloud.io",
        "http://hub-mirror.c.163.com",
        "https://registry.docker-cn.com"
    ],
 ...
}
# systemctl restart docker

References

  • Kubernetes Documentation
  • Gong Zheng, Wu Zhihui, Cui Xiulong, Yan Jianyong. Kubernetes权威指南 (The Definitive Guide to Kubernetes): From Docker to Kubernetes Hands-On, 4th ed. Publishing House of Electronics Industry, June 2019.