Kubernetes Chapter 3: kubeadm
Published: 2019-06-13


kubeadm

 https://kubernetes.io/zh/docs/reference/setup-tools/kubeadm/kubeadm-init/

kubeadm is a tool designed to provide kubeadm init and kubeadm join as best-practice "fast paths" for creating Kubernetes clusters. kubeadm performs the actions necessary to get a minimum viable cluster up and running. By design, it cares only about bootstrapping, not about provisioning machines. Likewise, installing various nice-to-have addons, such as the Kubernetes Dashboard, monitoring solutions, and cloud-specific addons, is out of scope. Instead, the expectation is that higher-level and more tailored tooling will be built on top of kubeadm; ideally, using kubeadm as the basis of all deployments makes it easier to create conformant clusters.

 

Main functions

kubeadm init      bootstraps a Kubernetes control-plane node
kubeadm join      bootstraps a Kubernetes worker node and joins it to the cluster
kubeadm upgrade   upgrades a Kubernetes cluster to a newer version
kubeadm config    configures your cluster for kubeadm upgrade if it was initialized with kubeadm v1.7.x or lower
kubeadm token     manages tokens for kubeadm join
kubeadm reset     reverts any changes made to this host by kubeadm init or kubeadm join
kubeadm version   prints the kubeadm version
kubeadm alpha     previews a set of features made available for gathering feedback from the community
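As a quick sanity check, several of the subcommands above can be run directly from the shell without touching the cluster state; a minimal sketch (output will vary with your environment):

kubeadm version                # print the kubeadm version
kubeadm config images list     # list the images kubeadm init would need to pull
kubeadm token list             # on an existing master, list the bootstrap tokens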

 

Test:

Deploy on three hosts with kubeadm

master: 10.2.61.21

install: docker-ce kubelet kubeadm kubectl

node:   10.2.61.22

node:   10.2.61.23

 

1. Configure the master

[root@localhost yum.repos.d]# yum install docker-ce kubelet kubeadm kubectl -y  

Enable the br_netfilter functionality

What it does: with the bridge-netfilter code enabled, the {ip,ip6,arp}tables can filter bridged IPv4/IPv6/ARP packets, even when they are encapsulated in an 802.1Q VLAN or PPPoE header. This enables stateful transparent firewalling: all of the filtering, logging and NAT features of those three tools can then be applied to bridged frames. Combined with ebtables, the bridge-nf code therefore makes Linux a very powerful transparent firewall and makes it possible, for example, to build a transparent masquerading machine (i.e. all local hosts think they are directly connected to the Internet).
Whether {ip,ip6,arp}tables see bridged traffic can be disabled or enabled with the corresponding proc entries under /proc/sys/net/bridge/:
bridge-nf-call-arptables
bridge-nf-call-iptables
bridge-nf-call-ip6tables
In addition, letting the firewall tools above see bridged 802.1Q VLAN and PPPoE encapsulated packets can be disabled or enabled with proc entries in the same directory:
bridge-nf-filter-vlan-tagged
bridge-nf-filter-pppoe-tagged
These proc entries are just regular files. Writing "1" to a file (echo 1 > file) enables the corresponding feature, while writing "0" disables it.
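For example, the toggling described above can be done directly against the proc files; a minimal sketch (it assumes the br_netfilter module is already loaded, otherwise these files do not exist):

modprobe br_netfilter                                    # make sure the bridge-nf proc entries exist
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables    # let iptables see bridged IPv4 traffic
cat /proc/sys/net/bridge/bridge-nf-call-iptables         # verify: should print 1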
[root@localhost sysctl.d]# lsmod | grep br_netfilter
[root@localhost sysctl.d]# modprobe br_netfilter      # load the module
[root@localhost sysctl.d]# lsmod | grep br_netfilter
br_netfilter           22256  0
bridge                146976  1 br_netfilter
[root@localhost sysctl.d]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@localhost sysctl.d]# cat k8s.conf          # the configured policy
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
[root@localhost sysctl.d]#
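Note that modprobe br_netfilter only lasts until the next reboot. An optional extra (an assumption about your setup, not something the steps above require) is to load the module automatically at boot via a systemd modules-load drop-in, alongside the sysctl file shown above:

echo br_netfilter > /etc/modules-load.d/br_netfilter.conf   # load the module automatically at boot
sysctl -p /etc/sysctl.d/k8s.conf                            # re-apply the bridge/forwarding settings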

 

Edit the docker.service configuration file

[root@localhost /]# cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
Environment="HTTPS_PROXY=http://www.ik8s.io:10080 NO_PROXY=10.0.0.0/8"   # many images must be pulled from the official k8s registries, which may be unreachable, so someone else's proxy is used for the downloads
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
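An alternative to editing /usr/lib/systemd/system/docker.service directly (which a docker-ce package update may overwrite) is to put the same proxy environment in a systemd drop-in; this is only a sketch of that option, reusing the proxy address from the unit above:

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTPS_PROXY=http://www.ik8s.io:10080" "NO_PROXY=10.0.0.0/8"
EOF
systemctl daemon-reload && systemctl restart docker      # reload units and restart docker to pick up the proxy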

  

2. kubeadm init [flags]: initialize a master node

   Workflow of the init command

The kubeadm init command bootstraps a Kubernetes master node by performing the following steps:
1. Runs a series of pre-flight checks to validate the system state before making any changes. Some checks only trigger warnings; others are considered errors and cause kubeadm to exit, unless the problem is fixed or the user specifies --ignore-preflight-errors=<list-of-errors>.
2. Generates a self-signed CA certificate (or uses an existing one, if provided) to set up identities for each component in the cluster. If the user has provided their own CA certificate and/or key via the certificate directory configured with --cert-dir (default /etc/kubernetes/pki), this step is skipped, as described in the documentation on using custom certificates. If --apiserver-cert-extra-sans is specified, the API server certificate will carry additional SAN entries, lower-cased where necessary.
3. Writes kubeconfig files into /etc/kubernetes/ for the kubelet, controller-manager and scheduler to connect to the API server, each with its own identity, as well as a separate kubeconfig file named admin.conf for administrative operations.
4. If kubeadm is invoked with --feature-gates=DynamicKubeletConfig, it writes the kubelet init configuration to /var/lib/kubelet/config/init/kubelet. See "Set Kubelet parameters via a config file" and "Reconfigure a Node's Kubelet in a live cluster" for more about dynamic kubelet configuration. This feature is currently off by default and gated by a feature flag, but it is likely to be enabled by default in a future release.
5. Generates static Pod manifests for the API server, controller manager and scheduler. If no external etcd service is provided, an additional static Pod manifest is generated for etcd.
6. The static Pod manifests are written to /etc/kubernetes/manifests; the kubelet watches this directory and creates the Pods at startup. Once the control-plane Pods are up and running, the kubeadm init workflow continues.
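If you want to run or repeat a single step of this workflow rather than the whole thing, kubeadm (v1.13 and later) also exposes the steps as phases; a minimal sketch, with illustrative flag values only:

kubeadm init phase preflight                                   # step 1: run only the pre-flight checks
kubeadm init phase certs all --cert-dir /etc/kubernetes/pki    # step 2: generate only the certificates
kubeadm init phase kubeconfig all                              # step 3: write only the kubeconfig files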

 

[root@localhost ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://registry.docker-cn.com"],
"exec-opts": ["native.cgroupdriver=systemd"]     // Cgroup Driver: choose systemd
}
[root@localhost ~]#
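After changing /etc/docker/daemon.json, docker has to be restarted, and the active cgroup driver can be verified; a quick check (the exact wording of the output line depends on your docker version):

systemctl restart docker
docker info | grep -i cgroup          # should report: Cgroup Driver: systemd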

  Run the initialization

// Before initializing, I used Docker Hub plus GitHub automated builds to download all the required images locally, then re-tagged them.
// Use [kubeadm config images list] to see the required images and their versions.
// The --ignore-preflight-errors=Swap flag does not make the problem go away: the kubelet still fails to start, so swap really has to be turned off with swapoff -a.
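A sketch of that pre-pull-and-retag workflow (the mirror repository name below is purely a placeholder; substitute whatever registry you can actually reach):

kubeadm config images list --kubernetes-version v1.15.0        # print the images and tags kubeadm needs
docker pull <your-mirror>/kube-apiserver:v1.15.0               # pull from a reachable mirror (placeholder name)
docker tag <your-mirror>/kube-apiserver:v1.15.0 k8s.gcr.io/kube-apiserver:v1.15.0   # re-tag to the name kubeadm expects
swapoff -a                                                     # swap must really be off; --ignore-preflight-errors=Swap is not enough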
// If a previous kubeadm init attempt failed, run kubeadm reset before retrying to clean up the data it generated; otherwise kubeadm will complain that files already exist.
[root@localhost ~]# kubeadm init --kubernetes-version=v1.15.0  --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.61.21]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 34.502758 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: 1awp15.lkz231yb9nbhbdx4
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:     // the commands you have to run to use the cluster

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.61.21:6443 --token 1awp15.lkz231yb9nbhbdx4 \
    --discovery-token-ca-cert-hash sha256:fe9433078dc9ea4eba963ab00b7dd388a24c2367152dff5ac07ac89ef8856849
[root@localhost ~]#

  

 Since we are operating as root, there is no need to change the owner and group of the config file

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

 

Kubectl  

kubectl is a command-line interface for running commands against a Kubernetes cluster. kubectl looks for a file named config in the $HOME/.kube directory. You can specify a different kubeconfig file by setting the KUBECONFIG environment variable or by passing the --kubeconfig flag.
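For example, both ways of pointing kubectl at a non-default kubeconfig look like this (the path here is just illustrative):

export KUBECONFIG=/etc/kubernetes/admin.conf                  # via the environment variable
kubectl get nodes
kubectl --kubeconfig=/etc/kubernetes/admin.conf get nodes     # or via the flag, per command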

 

  

[root@localhost /]# kubectl get pods --all-namespaces     // check the status of all pods in all namespaces
NAMESPACE     NAME                                            READY   STATUS                  RESTARTS   AGE
kube-system   coredns-5c98db65d4-bzvgt                        0/1     Pending                 0          21h
kube-system   coredns-5c98db65d4-c2zw8                        0/1     Pending                 0          21h
kube-system   etcd-localhost.localdomain                      1/1     Running                 0          21h
kube-system   kube-apiserver-localhost.localdomain            1/1     Running                 0          21h
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running                 0          21h
kube-system   kube-flannel-ds-amd64-wkmx6                     0/1     Init:ImagePullBackOff   0          17m   // the flannel network module never loads, and the node status stays NotReady
kube-system   kube-proxy-lb4jf                                1/1     Running                 0          21h
kube-system   kube-scheduler-localhost.localdomain            1/1     Running                 0          21h
[root@localhost /]#

  

——————————————————————————————————————————————————————————————————————————————————-

// There is a problem with the configuration above: https://stackoverflow.com/questions/52098214/kube-flannel-in-crashloopbackoff-status says the --pod-network-cidr network must be specified.

Reset with kubeadm reset; also delete the .kube directory under $HOME, and remove whatever other files kubeadm reports as still existing.
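A minimal sketch of that cleanup sequence before re-running kubeadm init (the extra rm targets are only examples of paths kubeadm may report as leftovers; delete what the error messages actually name):

kubeadm reset -f                        # tear down what the previous kubeadm init created
rm -rf $HOME/.kube                      # remove the old admin kubeconfig copy
rm -rf /etc/cni/net.d /var/lib/etcd     # example leftovers only, if kubeadm complains about them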

 kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12 --ignore-preflight-errors=Swap

[root@localhost lib]# kubeadm init --kubernetes-version=v1.15.0 --pod-network-cidr=10.244.0.0/16  --service-cidr=10.96.0.0/12  --ignore-preflight-errors=Swap
[init] Using Kubernetes version: v1.15.0
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [localhost.localdomain kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.2.61.21]
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [localhost.localdomain localhost] and IPs [10.2.61.21 127.0.0.1 ::1]
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 36.002842 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node localhost.localdomain as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: wxgx55.vjdl3ampsahtlkl3
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.2.61.21:6443 --token wxgx55.vjdl3ampsahtlkl3 \
    --discovery-token-ca-cert-hash sha256:caf8238dcdcbc374eb304612a08f13d296186cbe01e3941d3c919d97a7820809
[root@localhost lib]#

  

Deploy the flannel network automatically

https://github.com/coreos/flannel

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
[root@localhost /]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS                  RESTARTS   AGE
kube-system   coredns-5c98db65d4-pvkq7                        0/1     Pending                 0          21m
kube-system   coredns-5c98db65d4-wwfqp                        0/1     Pending                 0          21m
kube-system   etcd-localhost.localdomain                      1/1     Running                 0          20m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running                 0          20m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running                 0          20m
kube-system   kube-flannel-ds-amd64-rns7x                     0/1     Init:ImagePullBackOff   0          11m
kube-system   kube-proxy-qg9lx                                1/1     Running                 0          21m
kube-system   kube-scheduler-localhost.localdomain            1/1     Running                 0          20m
[root@localhost /]#

  

  

 Troubleshoot with kubectl describe pod -n kube-system kube-flannel-ds-amd64-rns7x

[root@localhost /]# kubectl describe pod -n kube-system kube-flannel-ds-amd64-rns7x
Name:           kube-flannel-ds-amd64-rns7x
Namespace:      kube-system
Priority:       0
Node:           localhost.localdomain/10.2.61.21
Start Time:     Wed, 10 Jul 2019 15:48:59 +0800
Labels:         app=flannel
                controller-revision-hash=7f489b5c67
                pod-template-generation=1
                tier=node
Annotations:
Status:         Pending
IP:             10.2.61.21
Controlled By:  DaemonSet/kube-flannel-ds-amd64
Init Containers:
  install-cni:
    Container ID:
    Image:         quay.io/coreos/flannel:v0.11.0-amd64
    Image ID:
    Port:
    Host Port:
    Command:
      cp
    Args:
      -f
      /etc/kube-flannel/cni-conf.json
      /etc/cni/net.d/10-flannel.conflist
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:
    Mounts:
      /etc/cni/net.d from cni (rw)
      /etc/kube-flannel/ from flannel-cfg (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-2jfdz (ro)
Containers:
  kube-flannel:
    Container ID:
    Image:         quay.io/coreos/flannel:v0.11.0-amd64
    Image ID:
    Port:
    Host Port:
    Command:
      /opt/bin/flanneld
    Args:
      --ip-masq
      --kube-subnet-mgr
    State:          Waiting
      Reason:       PodInitializing
    Ready:          False
    Restart Count:  0
    Limits:
      cpu:     100m
      memory:  50Mi
    Requests:
      cpu:     100m
      memory:  50Mi
    Environment:
      POD_NAME:       kube-flannel-ds-amd64-rns7x (v1:metadata.name)
      POD_NAMESPACE:  kube-system (v1:metadata.namespace)
    Mounts:
      /etc/kube-flannel/ from flannel-cfg (rw)
      /run/flannel from run (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from flannel-token-2jfdz (ro)
Conditions:
  Type              Status
  Initialized       False
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  run:
    Type:          HostPath (bare host directory volume)
    Path:          /run/flannel
    HostPathType:
  cni:
    Type:          HostPath (bare host directory volume)
    Path:          /etc/cni/net.d
    HostPathType:
  flannel-cfg:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      kube-flannel-cfg
    Optional:  false
  flannel-token-2jfdz:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  flannel-token-2jfdz
    Optional:    false
QoS Class:       Guaranteed
Node-Selectors:  beta.kubernetes.io/arch=amd64
Tolerations:     :NoSchedule
                 node.kubernetes.io/disk-pressure:NoSchedule
                 node.kubernetes.io/memory-pressure:NoSchedule
                 node.kubernetes.io/network-unavailable:NoSchedule
                 node.kubernetes.io/not-ready:NoExecute
                 node.kubernetes.io/pid-pressure:NoSchedule
                 node.kubernetes.io/unreachable:NoExecute
                 node.kubernetes.io/unschedulable:NoSchedule
Events:
  Type     Reason     Age                From                            Message
  ----     ------     ----               ----                            -------
  Normal   Scheduled  14m                default-scheduler               Successfully assigned kube-system/kube-flannel-ds-amd64-rns7x to localhost.localdomain
  Warning  Failed     3m14s              kubelet, localhost.localdomain  Failed to pull image "quay.io/coreos/flannel:v0.11.0-amd64": rpc error: code = Unknown desc = context canceled
  Warning  Failed     3m14s              kubelet, localhost.localdomain  Error: ErrImagePull
  Normal   BackOff    3m13s              kubelet, localhost.localdomain  Back-off pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
  Warning  Failed     3m13s              kubelet, localhost.localdomain  Error: ImagePullBackOff
  Normal   Pulling    3m (x2 over 14m)   kubelet, localhost.localdomain  Pulling image "quay.io/coreos/flannel:v0.11.0-amd64"
// The events show that the quay.io/coreos/flannel:v0.11.0-amd64 image still cannot be pulled. Pull it again via Docker Hub plus GitHub.
 // Make sure quay.io/coreos/flannel:v0.11.0-amd64 shows up in docker image ls, then re-run:
 // kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
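One way to satisfy that requirement when quay.io is unreachable is to pull the image from a mirror and re-tag it; the mirror repository name below is a placeholder, not a recommendation:

docker pull <your-mirror>/flannel:v0.11.0-amd64                                        # placeholder mirror repository
docker tag <your-mirror>/flannel:v0.11.0-amd64 quay.io/coreos/flannel:v0.11.0-amd64    # re-tag to the name the DaemonSet expects
docker image ls | grep flannel                                                         # confirm the image is now visible locally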
 
[root@localhost /]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.extensions/psp.flannel.unprivileged configured
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg unchanged
daemonset.extensions/kube-flannel-ds-amd64 unchanged
daemonset.extensions/kube-flannel-ds-arm64 unchanged
daemonset.extensions/kube-flannel-ds-arm unchanged
daemonset.extensions/kube-flannel-ds-ppc64le unchanged
daemonset.extensions/kube-flannel-ds-s390x unchanged
[root@localhost /]#
[root@localhost /]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   coredns-5c98db65d4-pvkq7                        1/1     Running   0          66m
kube-system   coredns-5c98db65d4-wwfqp                        1/1     Running   0          66m
kube-system   etcd-localhost.localdomain                      1/1     Running   0          66m
kube-system   kube-apiserver-localhost.localdomain            1/1     Running   0          66m
kube-system   kube-controller-manager-localhost.localdomain   1/1     Running   0          65m
kube-system   kube-flannel-ds-amd64-rns7x                     1/1     Running   0          56m
kube-system   kube-proxy-qg9lx                                1/1     Running   0          66m
kube-system   kube-scheduler-localhost.localdomain            1/1     Running   0          65m
[root@localhost /]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
localhost.localdomain   Ready    master   67m   v1.15.0
[root@localhost /]#
[root@localhost /]#

  

  

 Node configuration

 

Notes: run swapoff -a to turn off swap

Load the module: modprobe br_netfilter

Enable the bridge settings, and download the docker images in advance

# yum install docker-ce kubelet kubeadm kubectl

 

[root@localhost sysctl.d]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
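swapoff -a only lasts until the next reboot. If you also want swap to stay off on the nodes (an optional extra, not part of the original steps), comment the swap entry out of /etc/fstab, for example:

swapoff -a
sed -ri 's/.*swap.*/#&/' /etc/fstab    # comment out every swap line so it stays off after reboot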

 

 

 

# I imported the images offline with docker image save | load:
# [root@localhost ~]# docker image save -o /root/kube.tar k8s.gcr.io/kube-apiserver:v1.15.0 k8s.gcr.io/kube-controller-manager:v1.15.0 k8s.gcr.io/kube-scheduler:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0 k8s.gcr.io/pause:3.1 k8s.gcr.io/etcd:3.3.10 k8s.gcr.io/coredns:1.3.1 quay.io/coreos/flannel:v0.11.0-amd64
[root@localhost ~]# docker image load -i kube.tar 
[root@localhost sysctl.d]# docker image ls
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.15.0             d235b23c3570        3 weeks ago         82.4MB
k8s.gcr.io/kube-apiserver            v1.15.0             201c7a840312        3 weeks ago         207MB
k8s.gcr.io/kube-scheduler            v1.15.0             2d3813851e87        3 weeks ago         81.1MB
k8s.gcr.io/kube-controller-manager   v1.15.0             8328bb49b652        3 weeks ago         159MB
quay.io/coreos/flannel               v0.11.0-amd64       ff281650a721        5 months ago        52.6MB
k8s.gcr.io/coredns                   1.3.1               eb516548c180        5 months ago        40.3MB
k8s.gcr.io/etcd                      3.3.10              2c4adeb21b4f        7 months ago        258MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        18 months ago       742kB
[root@localhost sysctl.d]#

 

# Join the node to the cluster with the token generated on the master node.
# The join failed at first because the original bootstrap token had expired.
# kubeadm token list lists the tokens; kubeadm token create generates a new one. Tokens are valid for 24 hours.

  [root@localhost ~]# kubeadm token list
  TOKEN                     TTL         EXPIRES                     USAGES                   DESCRIPTION                                                 EXTRA GROUPS
  1thhr1.6t0yv35khnamf6zz   23h         2019-07-12T16:43:46+08:00   authentication,signing   <none>                                                      system:bootstrappers:kubeadm:default-node-token
  wxgx55.vjdl3ampsahtlkl3   <invalid>   2019-07-11T15:38:46+08:00   authentication,signing   The default bootstrap token generated by 'kubeadm init'.   system:bootstrappers:kubeadm:default-node-token
  [root@localhost ~]#
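When the original token has expired, a convenient shortcut (available in kubeadm of this era) is to have kubeadm print a complete, ready-to-run join command with a fresh token:

kubeadm token create --print-join-command    # run on the master; prints a full kubeadm join command for the nodes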

 
[root@localhost sysctl.d]# kubeadm join 10.2.61.21:6443 --token 1thhr1.6t0yv35khnamf6zz     --discovery-token-ca-cert-hash sha256:caf8238dcdcbc374eb304612a08f13d296186cbe01e3941d3c919d97a7820809
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
[root@localhost sysctl.d]#

 

If you want to run kubectl on a node, copy /root/.kube/config from the master node to the same location on that node.
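For example, from the node (a sketch; it assumes root SSH access from the node to the master at 10.2.61.21):

mkdir -p /root/.kube
scp root@10.2.61.21:/root/.kube/config /root/.kube/config    # copy the master's kubeconfig to the same path on the node
kubectl get nodes                                             # should now work from the node as well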

# node added successfully
[root@localhost ~]# kubectl get nodes
NAME                    STATUS   ROLES    AGE   VERSION
kube.node2              Ready    <none>   26m   v1.15.0
localhost.localdomain   Ready    master   25h   v1.15.0
[root@localhost ~]#

  

Because the hostnames were never set in the configuration above, the cluster was rebuilt from scratch with proper hostnames.
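Setting distinct hostnames before running kubeadm avoids every node registering as localhost.localdomain; a minimal sketch using the names that appear in the output below (which IP maps to which node name is assumed here):

hostnamectl set-hostname kube.node1        # run the matching command on each machine before kubeadm init/join
cat >> /etc/hosts <<'EOF'
10.2.61.21 kube.master
10.2.61.22 kube.node1
10.2.61.23 kube.node2
EOF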

 

kubeadm join 10.2.61.21:6443 --token 8gak7a.ncl9l1kvjzyqgar2 \    --discovery-token-ca-cert-hash sha256:f30655d78e55a6efe0702d19af1f247e78d5a63586a913a614084b9af048f5d0

 

 

    

 

[root@kube sysctl.d]# kubectl get nodes
NAME          STATUS   ROLES    AGE   VERSION
kube.master   Ready    master   22h   v1.15.0
kube.node1    Ready    <none>   57m   v1.15.0
kube.node2    Ready    <none>   22h   v1.15.0
[root@kube sysctl.d]#

Reposted from: https://www.cnblogs.com/zy09/p/11134004.html
