# Original content; please credit the source when republishing
Author's blog: https://aronligithub.github.io/
Preface
After the groundwork on Kubernetes fundamentals in the previous article, and with the etcd cluster deployed, we can now start deploying the Kubernetes cluster services.
If you arrived at this article directly and are unsure how etcd is deployed, or have not read the earlier articles in this Kubernetes series, you can start from there.
Basic deployment steps
- Download the Kubernetes binaries
- Generate the CA certificates with openssl
- Deploy the Kubernetes master services
- Deploy the Kubernetes node services
Environment preparation
Server topology
host name | ServerIP | Services |
---|---|---|
Server81 | 172.16.5.81 | master 、node 、etcd |
Server86 | 172.16.5.86 | node 、etcd |
Server87 | 172.16.5.87 | node 、etcd |
Server pre-configuration
- Disable the firewall service
```
systemctl stop firewalld
systemctl disable firewalld
setenforce 0
```
- Disable SELinux
```
# Check SELinux status:
/usr/sbin/sestatus -v    # or simply: sestatus
# Disabling it permanently requires a reboot:
# edit /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled,
# then reboot the machine
```
- Configure NTP on the servers (to keep time synchronized between them)
```
yum install ntp ntpdate -y
timedatectl status
timedatectl list-timezones | grep Shanghai
timedatectl set-timezone Asia/Hong_Kong
timedatectl set-ntp yes
date
```
- Disable the swap partition
```
# Turn off swap
sudo swapoff -a
# To disable swap permanently, comment out the swap line in the following file
sudo vi /etc/fstab
```
Downloading the Kubernetes 1.11 binaries
Download the pre-built binary packages from the official Kubernetes GitHub releases.
Visiting the Kubernetes GitHub page, you will see the release assets listed.
Download the kubernetes.tar.gz file, which contains the Kubernetes service binaries, documentation, and examples.
Note: downloading it now requires a proxy/VPN from mainland China (without one the later downloads may still work, just very slowly).
Extracting the archive and downloading the server and client binaries
1. Upload and extract the binary archive
2. Download the client and server binaries
According to the README under kubernetes/client, you need to run cluster/get-kube-binaries.sh to download the client and server binaries.
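As a rough illustration (assuming the archive was saved to the current directory), extracting the package and fetching the binaries looks like this:
```
tar -xzvf kubernetes.tar.gz
cd kubernetes
# downloads the server and client packages for your platform
cluster/get-kube-binaries.sh
```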
3. Check the downloaded server files
With the Kubernetes binaries downloaded, the next step is to create the TLS certificate files the cluster needs.
Creating the CA certificates with openssl
The certificates needed to deploy the Kubernetes services are:
Name | Certificate and private key |
---|---|
Root CA | ca.pem and ca.key |
API server | apiserver.pem and apiserver.key |
Cluster administrator | admin.pem and admin.key |
Node kube-proxy | proxy.pem and proxy.key |
The kubelet certificate and key for each node are not created here: they are generated automatically via TLS bootstrap when kubelet starts, and issued once the resulting CSR is approved on the master.
With these basics covered, the certificate creation steps follow.
Before that, here is what the generated files will look like:
Creating the root CA
```
# Generate the root CA.
# Generate an RSA private key (unencrypted)
openssl genrsa -out ca.key 2048
# Generate a self-signed certificate from that key
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.pem -subj "/CN=kubernetes/O=k8s"
```
Parameter notes:
- -new: generate a certificate request
- -x509: output a certificate directly (self-signed)
- -key: the private key file to use
- -days: certificate validity, here 10000 days
- -out: the output certificate file
- -subj: the certificate subject; here the CN and O values are set

The CN and O values are important: Kubernetes reads the user name and group from these two fields.
- "CN" (Common Name): kube-apiserver extracts this field as the request's user name; browsers also use it to verify whether a site is legitimate.
- "O" (Organization): kube-apiserver extracts this field as the group the requesting user belongs to.
Generating the apiserver certificate
The master needs the following files: the root CA key (ca.key) and root CA certificate (ca.pem), plus the apiserver certificate (apiserver.pem) and its private key (apiserver.key).
1. Create openssl.cnf
Example openssl.cnf:
```
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = ${K8S_SERVICE_IP}
IP.2 = ${MASTER_IPV4}
```
Following that template, the openssl cnf file below uses server81 as the master server.
Create the openssl.cnf file:
```
[root@server81 openssl]# vim openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = k8s_master
IP.1 = 10.0.6.1      # cluster service IP (first address of the service CIDR)
IP.2 = 172.16.5.81   # master IP address
IP.3 = 10.1.0.1      # docker/pod network IP
IP.4 = 10.0.6.200    # kubernetes DNS service IP
```
2. Generate the apiserver key pair
```
# Generate the API server keypair.
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kubernetes/O=k8s" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf
```
The generated root CA files (ca.key, ca.pem) and apiserver certificate (apiserver.key, apiserver.pem) are normally placed under /etc/kubernetes/kubernetesTLS/ on the master node (this path is customizable; you don't have to use mine).
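Optionally, you can verify that the SANs from openssl.cnf made it into the signed certificate (a check I'm adding for convenience):
```
openssl x509 -in apiserver.pem -noout -text | grep -A 2 "Subject Alternative Name"
# should list the DNS names plus 10.0.6.1, 172.16.5.81, 10.1.0.1 and 10.0.6.200
```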
3. Certificate-related configuration notes
The apiserver configuration must include the following flags:
```
## Kubernetes access certificates:
--token-auth-file=/etc/kubernetes/token.csv
--tls-cert-file=/etc/kubernetes/kubernetesTLS/apiserver.pem
--tls-private-key-file=/etc/kubernetes/kubernetesTLS/apiserver.key
--client-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem
--service-account-key-file=/etc/kubernetes/kubernetesTLS/ca.key
## Etcd access certificates:
--storage-backend=etcd3
--etcd-cafile=/etc/etcd/etcdSSL/ca.pem
--etcd-certfile=/etc/etcd/etcdSSL/etcd.pem
--etcd-keyfile=/etc/etcd/etcdSSL/etcd-key.pem
```
The controller-manager configuration must include the following flags:
```
## Kubernetes access certificates:
--cluster-name=kubernetes
--cluster-signing-cert-file=/etc/kubernetes/kubernetesTLS/ca.pem
--cluster-signing-key-file=/etc/kubernetes/kubernetesTLS/ca.key
--service-account-private-key-file=/etc/kubernetes/kubernetesTLS/ca.key
--root-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem
```
Generating the admin (cluster administrator) certificate
```
## This certificate is used by kubectl; generate it as follows:
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters/OU=System"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out admin.pem -days 3650
```
Explanation:
Once kube-apiserver runs with RBAC enabled, clients (kubelet, kube-proxy, Pods, and so on) are authorized based on a user name and a group when their requests are checked.
Where do that user name and group come from? Look at the openssl command used above:
`openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters/OU=System"`
Here /CN=admin/O=system:masters/OU=System defines the user (CN) as admin and the group (O) as system:masters, with OU=System as the organizational unit.
How does Kubernetes use this once it is defined? kube-apiserver ships with predefined RBAC bindings; for example, the cluster-admin binding ties the group system:masters to the ClusterRole cluster-admin, which grants permission to call every kube-apiserver API.
Naturally, when we create the admin certificate we have to set its user and group exactly as described above.
What happens when kubectl uses this certificate to access kube-apiserver? Because the certificate is signed by the cluster CA, authentication succeeds; and because its group is the pre-authorized system:masters, the client is granted access to all APIs.
The same applies if you sign certificates with CFSSL: the user and group have to be configured in the same way. I won't separately cover signing the Kubernetes certificates with CFSSL here; the important thing is to understand the relationship between certificate signing and Kubernetes RBAC role bindings.
Generating the node kube-proxy certificate
```
openssl genrsa -out proxy.key 2048
openssl req -new -key proxy.key -out proxy.csr -subj "/CN=system:kube-proxy"
openssl x509 -req -in proxy.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out proxy.pem -days 3650
```
Explanation:
From the admin example above, which showed how the CN in a certificate maps to RBAC bindings, it is easy to see that CN is what defines the kube-proxy user here.
CN sets the requesting user of this certificate to system:kube-proxy.
In the default Kubernetes RBAC bindings, kube-apiserver predefines the ClusterRoleBinding system:node-proxier, which binds the user system:kube-proxy to the ClusterRole system:node-proxier; that role grants the proxy-related kube-apiserver API permissions.
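Once the cluster is up and kubectl is configured, you can inspect that predefined binding yourself (optional check):
```
kubectl get clusterrolebinding system:node-proxier -o yaml
# subjects include the user system:kube-proxy; roleRef points at the ClusterRole system:node-proxier
```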
Copy the generated certificates to the target deployment directory
These are all the certificate files needed to deploy the master node.
You may be wondering why I went through the CA generation in such verbose detail with so many comments, and after reading all of it you probably feel there are an awful lot of steps. Don't worry: the detailed walkthrough is there to help you understand, and to deal with the many steps I have already written a script that automates the certificate signing.
The source is attached here:
- Step 1: create the openssl cnf file
```
[root@server81 openssl]# cat create_openssl_cnf.sh
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
################## Set PARAMS ######################
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
DockerServiceIP="10.1.0.1"     ## 10.1.0.0/16
ClusterServiceIP="10.0.6.1"    ## 10.0.6.0/24
kubeDnsIP="10.0.6.200"
## function
function create_openssl_cnf(){
cat <<EOF > $basedir/openssl.cnf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster
DNS.5 = kubernetes.default.svc.cluster.local
DNS.6 = k8s_master
IP.1 = $ClusterServiceIP    # cluster service IP
IP.2 = $MASTER_IP           # master IP address
IP.3 = $DockerServiceIP     # docker/pod network IP
IP.4 = $kubeDnsIP           # kubernetes DNS IP
EOF
}
create_openssl_cnf
[root@server81 openssl]#
```
- Step 2: create the TLS certificates the master needs
```
94 [root@server81 install_k8s_master]# ls
configDir Step1_create_CA.sh Step2_create_token.sh Step4_install_controller.sh Step6_create_kubeconfig_file.sh
Implement.sh Step1_file Step3_install_apiserver.sh Step5_install_scheduler.sh Step7_set_master_info.sh
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# vim Step1_create_CA.sh
[root@server81 install_k8s_master]# cat Step1_create_CA.sh
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
configdir=$basedir/Step1_file
openssldir=$configdir/openssl
ssldir=$configdir/kubernetesTLS
kubernetsDir=/etc/kubernetes
kubernetsTLSDir=/etc/kubernetes/kubernetesTLS
################## Set PARAMS ######################
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
## function and implments
function check_firewalld_selinux(){
systemctl status firewalld
/usr/sbin/sestatus -v
swapoff -a
}
check_firewalld_selinux
function create_ssl(){
cd $configdir && rm -rf $ssldir && mkdir -p $ssldir
cd $ssldir && \
# Generate the root CA.
openssl genrsa -out ca.key 2048
openssl req -x509 -new -nodes -key ca.key -days 10000 -out ca.pem -subj "/CN=kubernetes/O=k8s"
ls $ssldir
}
create_ssl
function create_openssl_cnf(){
sh $openssldir/create_openssl_cnf.sh
cat $openssldir/openssl.cnf > $ssldir/openssl.cnf
}
create_openssl_cnf
function create_apiserver_key_pem(){
cd $ssldir && \
openssl genrsa -out apiserver.key 2048
openssl req -new -key apiserver.key -out apiserver.csr -subj "/CN=kubernetes/O=k8s" -config openssl.cnf
openssl x509 -req -in apiserver.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out apiserver.pem -days 3650 -extensions v3_req -extfile openssl.cnf
ls $ssldir
}
create_apiserver_key_pem
function create_admin_key_pem(){
cd $ssldir && \
openssl genrsa -out admin.key 2048
openssl req -new -key admin.key -out admin.csr -subj "/CN=admin/O=system:masters/OU=System"
openssl x509 -req -in admin.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out admin.pem -days 3650
ls $ssldir
}
create_admin_key_pem
function create_proxy_key_pem(){
cd $ssldir && \
openssl genrsa -out proxy.key 2048
openssl req -new -key proxy.key -out proxy.csr -subj "/CN=system:kube-proxy"
openssl x509 -req -in proxy.csr -CA ca.pem -CAkey ca.key -CAcreateserial -out proxy.pem -days 3650
ls $ssldir
}
create_proxy_key_pem
function setup_ca(){
rm -rf $kubernetsDir
mkdir -p $kubernetsTLSDir
cat $ssldir/ca.pem > $kubernetsTLSDir/ca.pem
cat $ssldir/ca.key > $kubernetsTLSDir/ca.key
cat $ssldir/apiserver.pem > $kubernetsTLSDir/apiserver.pem
cat $ssldir/apiserver.key > $kubernetsTLSDir/apiserver.key
cat $ssldir/admin.pem > $kubernetsTLSDir/admin.pem
cat $ssldir/admin.key > $kubernetsTLSDir/admin.key
cat $ssldir/proxy.pem > $kubernetsTLSDir/proxy.pem
cat $ssldir/proxy.key > $kubernetsTLSDir/proxy.key
echo "checking TLS file:"
ls $kubernetsTLSDir
}
setup_ca
[root@server81 install_k8s_master]#
```
Running the script generates the certificates as follows:
```
59[root@server81 install_k8s_master]# ./Step1_create_CA.sh
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
SELinux status: disabled
Generating RSA private key, 2048 bit long modulus
................................................+++
............................................................................+++
e is 65537 (0x10001)
ca.key ca.pem
Generating RSA private key, 2048 bit long modulus
.......................................................................................+++
.............+++
e is 65537 (0x10001)
Signature ok
subject=/CN=kubernetes/O=k8s
Getting CA Private Key
apiserver.csr apiserver.key apiserver.pem ca.key ca.pem ca.srl openssl.cnf
Generating RSA private key, 2048 bit long modulus
.......................................+++
...........+++
e is 65537 (0x10001)
Signature ok
subject=/CN=admin/O=system:masters/OU=System
Getting CA Private Key
admin.csr admin.key admin.pem apiserver.csr apiserver.key apiserver.pem ca.key ca.pem ca.srl openssl.cnf
Generating RSA private key, 2048 bit long modulus
...+++
..+++
e is 65537 (0x10001)
Signature ok
subject=/CN=system:kube-proxy
Getting CA Private Key
admin.csr admin.pem apiserver.key ca.key ca.srl proxy.csr proxy.pem
admin.key apiserver.csr apiserver.pem ca.pem openssl.cnf proxy.key
checking TLS file:
admin.key admin.pem apiserver.key apiserver.pem ca.key ca.pem proxy.key proxy.pem
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# ls
configDir Step1_create_CA.sh Step2_create_token.sh Step4_install_controller.sh Step6_create_kubeconfig_file.sh
Implement.sh Step1_file Step3_install_apiserver.sh Step5_install_scheduler.sh Step7_set_master_info.sh
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# ls /etc/kubernetes/
kubernetesTLS
[root@server81 install_k8s_master]# ls /etc/kubernetes/kubernetesTLS/
admin.key admin.pem apiserver.key apiserver.pem ca.key ca.pem proxy.key proxy.pem
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# ls -ll /etc/kubernetes/kubernetesTLS/
total 32
-rw-r--r-- 1 root root 1675 Aug 19 22:21 admin.key
-rw-r--r-- 1 root root 1050 Aug 19 22:21 admin.pem
-rw-r--r-- 1 root root 1675 Aug 19 22:21 apiserver.key
-rw-r--r-- 1 root root 1302 Aug 19 22:21 apiserver.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 ca.key
-rw-r--r-- 1 root root 1135 Aug 19 22:21 ca.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 proxy.key
-rw-r--r-- 1 root root 1009 Aug 19 22:21 proxy.pem
[root@server81 install_k8s_master]#
```
See? With this script life feels a lot easier. Once you understand the detailed steps, just run the script and enjoy the extra coffee time.
Deploying the master
Copy the master binaries into the /usr/bin directory
```
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
softwaredir=$basedir/../install_kubernetes_software
binDir=/usr/bin    # target directory for the binaries
function copy_bin(){
    cp -v $softwaredir/kube-apiserver $binDir
    cp -v $softwaredir/kube-controller-manager $binDir
    cp -v $softwaredir/kube-scheduler $binDir
    cp -v $softwaredir/kubectl $binDir
}
copy_bin
```
How API server access control works
API server access control has three stages: Authentication, Authorization, and Admission Control.
Authentication:
When a client sends an API request to a non-read-only Kubernetes port, Kubernetes verifies the user's identity in one of three ways: certificate authentication, token authentication, or basic authentication.
① Certificate authentication
Set the apiserver flag --client_ca_file=SOMEFILE. The referenced file contains the CA used to validate client certificates; if validation succeeds, the subject of the client certificate becomes the request's username.
② Token authentication (the method used in this article)
Set the apiserver flag --token_auth_file=SOMEFILE. The token file has three columns: token, username, userid. When tokens are used, the HTTP request to the apiserver carries an extra header, Authorization, with the value "Bearer SOMETOKEN".
③ Basic authentication
Set the apiserver flag --basic_auth_file=SOMEFILE; if a password in the file is changed, the apiserver must be restarted for it to take effect. The file has three columns: password, username, userid. With this method, the HTTP request carries an Authorization header with the value "Basic BASE64ENCODED(USER:PASSWORD)".
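For illustration only (the token value below is the bootstrap token generated later in this article, and /version is an endpoint any authenticated user may call), a token-authenticated request looks like this:
```
curl -k -H "Authorization: Bearer 4b395732894828d5a34737d83c334330" https://172.16.5.81:6443/version
```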
Creating the TLS Bootstrapping Token
Token auth file
The token can be any string containing 128 bits of entropy, generated with a secure random number generator.
```
#!/bin/bash
```
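The script itself is collapsed above; a minimal sketch of what such a script typically does (the exact commands and the group name are my assumptions, not necessarily the author's original) is:
```
#!/bin/bash
# Generate a random 128-bit token and write token.csv in token,user,uid,"group" format
BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > /etc/kubernetes/token.csv <<EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
echo "Bootstrap token: ${BOOTSTRAP_TOKEN}"
```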
Afterwards, distribute token.csv to the /etc/kubernetes/ directory on every machine (master and nodes).
Setting the cluster parameters for the admin user
When the TLS certificates were created with openssl earlier, the user and group were already signed into the certificates; the next step is to define the admin user's cluster parameters.
```
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
## set param
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
KUBE_APISERVER="https://$MASTER_IP:6443"
# Set the cluster parameters
function config_cluster_param(){
  kubectl config set-cluster kubernetes \
    --certificate-authority=$kubernetesTLSDir/ca.pem \
    --embed-certs=true \
    --server=$KUBE_APISERVER
}
config_cluster_param
# Set the administrator credentials
function config_admin_credentials(){
  kubectl config set-credentials admin \
    --client-certificate=$kubernetesTLSDir/admin.pem \
    --client-key=$kubernetesTLSDir/admin.key \
    --embed-certs=true
}
config_admin_credentials
# Set the administrator context
function config_admin_context(){
  kubectl config set-context kubernetes --cluster=kubernetes --user=admin
}
config_admin_context
# Use the kubernetes context as the default
function config_default_context(){
  kubectl config use-context kubernetes
}
config_default_context
```
Note that because token authentication is used, Kubernetes will later need a bootstrap.kubeconfig file, and the relevant TLS material (the CA certificate) has to be written into it.
How does that material get embedded into bootstrap.kubeconfig? That is what the --embed-certs flag is for: when it is true, the certificate-authority certificate is embedded into the generated kubeconfig file instead of being referenced by path.
With these parameters specified, the rest is generated automatically in the later steps.
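To confirm the certificates really were embedded rather than referenced by path, you can dump the resulting kubeconfig (optional check):
```
kubectl config view --raw
# certificate-authority-data and client-certificate-data hold base64 blobs instead of file paths
```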
Installing kube-apiserver
- Write kube-apiserver.service (/usr/lib/systemd/system)
Write the kube-apiserver.service file into /usr/lib/systemd/system/; it is used to start the binary later:
```
25[Unit]
Description=Kube-apiserver Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
[Service]
Type=notify
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_ETCD_SERVERS \
$KUBE_API_ADDRESS \
$KUBE_API_PORT \
$KUBELET_PORT \
$KUBE_ALLOW_PRIV \
$KUBE_SERVICE_ADDRESSES \
$KUBE_ADMISSION_CONTROL \
$KUBE_API_ARGS
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target
```
Notes on the kube-apiserver.service parameters
```
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/apiserver
```
Explanation: these are the two configuration files the apiserver loads.
```
ExecStart=/usr/bin/kube-apiserver \
```
Explanation: this defines the binary the service runs (/usr/bin/kube-apiserver) and the flag variables passed to it, all of which come from the configuration files.
2. Write the config file (/etc/kubernetes)
The config file provides general Kubernetes parameters shared by the apiserver, controller-manager, and scheduler services.
Write it into the /etc/kubernetes directory. That directory is also customizable; if you change it, remember to update the EnvironmentFile paths in the service units accordingly.
```
[root@server81 kubernetes]# vim config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
#   kube-apiserver.service
#   kube-controller-manager.service
#   kube-scheduler.service
#   kubelet.service
#   kube-proxy.service

# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=true"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://172.16.5.81:8080"
```
- Write the apiserver config file (/etc/kubernetes)
The apiserver file holds parameters read only by the apiserver service. Write it into the /etc/kubernetes directory.
```
26###
## kubernetes system config
##
## The following values are used to configure the kube-apiserver
##
#
## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=172.16.5.81 --bind-address=172.16.5.81 --insecure-bind-address=172.16.5.81"
#
## The port on the local server to listen on.
#KUBE_API_PORT="--port=8080"
#
## Port minions listen on
#KUBELET_PORT="--kubelet-port=10250"
#
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379"
#
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.6.0/24"
#
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,NodeRestriction"
## Add your own!
KUBE_API_ARGS="--authorization-mode=Node,RBAC --runtime-config=rbac.authorization.k8s.io/v1beta1 --kubelet-https=true --token-auth-file=/etc/kubernetes/token.csv --service-node-port-range=30000-32767 --tls-cert-file=/etc/kubernetes/kubernetesTLS/apiserver.pem --tls-private-key-file=/etc/kubernetes/kubernetesTLS/apiserver.key --client-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem --service-account-key-file=/etc/kubernetes/kubernetesTLS/ca.key --storage-backend=etcd3 --etcd-cafile=/etc/etcd/etcdSSL/ca.pem --etcd-certfile=/etc/etcd/etcdSSL/etcd.pem --etcd-keyfile=/etc/etcd/etcdSSL/etcd-key.pem --enable-swagger-ui=true --apiserver-count=3 --audit-log-maxage=30 --audit-log-maxbackup=3 --audit-log-maxsize=100 --audit-log-path=/var/lib/audit.log --event-ttl=1h"
```

The configuration parameters are explained below.
Binding the master and advertise IP addresses
```
## The address on the local server to listen to.
KUBE_API_ADDRESS="--advertise-address=$MASTER_IP --bind-address=$MASTER_IP --insecure-bind-address=$MASTER_IP"
```
Explanation: MASTER_IP is the IP address of the server running the master services, for example:
--advertise-address=172.16.5.81 --bind-address=172.16.5.81 --insecure-bind-address=172.16.5.81
etcd cluster endpoint addresses
```
## Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=$ETCD_ENDPOINT"
```
Explanation: ETCD_ENDPOINT is how the etcd cluster is reached, for example:
--etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,https://172.16.5.87:2379
With a single etcd node, one address is enough, for example:
--etcd-servers=https://172.16.5.81:2379
The virtual network range for Kubernetes services
Kubernetes uses separate address ranges for pods and for services; the one defined here is the virtual IP range for services.
```
## Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.0.6.0/24"
```
Configuring the Kubernetes admission control plugins
```
## default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=ServiceAccount,NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota,NodeRestriction"
```
Configuring the additional custom parameters
KUBE_API_ARGS (the "## Add your own!" section shown in the apiserver file above) packs together the remaining flags; they are explained in the table below.
Flag | Description |
---|---|
--authorization-mode=Node,RBAC | Enable the Node and RBAC authorization modes |
--runtime-config=rbac.authorization.k8s.io/v1beta1 | Enable the rbac.authorization.k8s.io/v1beta1 API group |
--kubelet-https=true | Use HTTPS for apiserver-to-kubelet connections |
--token-auth-file=$kubernetesDir/token.csv | Token file used for token authentication |
--service-node-port-range=30000-32767 | NodePort range, 30000-32767 |
--tls-cert-file=$kubernetesTLSDir/apiserver.pem | apiserver TLS certificate |
--tls-private-key-file=$kubernetesTLSDir/apiserver.key | apiserver TLS private key |
--client-ca-file=$kubernetesTLSDir/ca.pem | Root CA used to verify client certificates |
--service-account-key-file=$kubernetesTLSDir/ca.key | Key used to validate service account tokens |
--storage-backend=etcd3 | Use the etcd v3 storage backend |
--etcd-cafile=$etcdCaPem | CA certificate for accessing etcd |
--etcd-certfile=$etcdPem | TLS certificate for accessing etcd |
--etcd-keyfile=$etcdKeyPem | TLS private key for accessing etcd |
--enable-swagger-ui=true | Enable swagger-ui, Kubernetes' online API browser |
--apiserver-count=3 | Number of API servers in the cluster; a single instance is also fine |
--event-ttl=1h | Keep audit events for 1 hour |
That covers the apiserver service unit and its configuration files. If anything is still unclear, leave me a comment.
3. Start the apiserver
```
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl status kube-apiserver
```
The run looks like this:
```
24[root@server81 install_kubernetes]# systemctl daemon-reload
[root@server81 install_kubernetes]# systemctl enable kube-apiserver
[root@server81 install_kubernetes]# systemctl start kube-apiserver
[root@server81 install_kubernetes]# systemctl status kube-apiserver
● kube-apiserver.service - Kube-apiserver Service
Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
Active: active (running) since Sun 2018-08-19 22:57:48 HKT; 11h ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 1688 (kube-apiserver)
CGroup: /system.slice/kube-apiserver.service
└─1688 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=https://172.16.5.81:2379,https://172.16.5.86:2379,...
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.415631 1688 storage_rbac.go:246] created role.rbac.authorizat...public
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.448673 1688 controller.go:597] quota admission added evaluato...dings}
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.454356 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.496380 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.534031 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.579370 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.612662 1688 storage_rbac.go:276] created rolebinding.rbac.aut...system
Aug 19 22:57:51 server81 kube-apiserver[1688]: I0819 22:57:51.652351 1688 storage_rbac.go:276] created rolebinding.rbac.aut...public
Aug 20 01:00:00 server81 kube-apiserver[1688]: I0820 01:00:00.330487 1688 trace.go:76] Trace[864267216]: "GuaranteedUpdate ...75ms):
Aug 20 01:00:00 server81 kube-apiserver[1688]: Trace[864267216]: [683.232535ms] [674.763984ms] Transaction prepared
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_kubernetes]#
```
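A quick way to confirm the apiserver is answering (my addition; it uses the insecure local port 8080 configured above, plus the admin context set earlier):
```
curl http://172.16.5.81:8080/healthz
# ok
kubectl get namespaces
```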
Installing kube-controller-manager
1. Write kube-controller-manager.service (/usr/lib/systemd/system)
Write kube-controller-manager.service into the /usr/lib/systemd/system directory to provide the systemd unit for the binary:
```
22[root@server81 install_k8s_master]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kube-controller-manager Service
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
After=kube-apiserver.service
Requires=kube-apiserver.service
[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_CONTROLLER_MANAGER_ARGS
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target
[root@server81 install_k8s_master]#
```
Notes on the kube-controller-manager.service parameters
```
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/controller-manager
```
Explanation: the environment files loaded by kube-controller-manager.service.
```
ExecStart=/usr/bin/kube-controller-manager \
```
Explanation: the binary the service runs (/usr/bin/kube-controller-manager) and the flags passed to it, all read from the configuration files.
2. The controller-manager config file (/etc/kubernetes)
Write the controller-manager file into the /etc/kubernetes directory.
```
11[root@server81 install_k8s_master]# cat /etc/kubernetes/
apiserver config controller-manager kubernetesTLS/ token.csv
[root@server81 install_k8s_master]# cat /etc/kubernetes/controller-manager
###
# The following values are used to configure the kubernetes controller-manager
# defaults from config and apiserver should be adequate
# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--master=http://172.16.5.81:8080 --address=127.0.0.1 --service-cluster-ip-range=10.0.6.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/kubernetesTLS/ca.pem --cluster-signing-key-file=/etc/kubernetes/kubernetesTLS/ca.key --service-account-private-key-file=/etc/kubernetes/kubernetesTLS/ca.key --root-ca-file=/etc/kubernetes/kubernetesTLS/ca.pem --leader-elect=true --cluster-cidr=10.1.0.0/16"
[root@server81 install_k8s_master]#
```
Notes on the controller-manager parameters
Flag | Description |
---|---|
--master=http://172.16.5.81:8080 | Address of the master (apiserver) |
--address=127.0.0.1 | Local listen address; it must be 127.0.0.1 because kube-apiserver expects the scheduler and controller-manager to run on the same machine |
--service-cluster-ip-range=10.0.6.0/24 | The Kubernetes service network range |
--cluster-name=kubernetes | The cluster name |
--cluster-signing-cert-file=$kubernetesTLSDir/ca.pem | CA certificate used to sign the certificates created for TLS bootstrap |
--cluster-signing-key-file=$kubernetesTLSDir/ca.key | CA private key used to sign the certificates created for TLS bootstrap |
--service-account-private-key-file=$kubernetesTLSDir/ca.key | Private key used to sign service account tokens |
--root-ca-file=$kubernetesTLSDir/ca.pem | Root CA used to verify the kube-apiserver certificate; when set, it is placed into every Pod's ServiceAccount |
--leader-elect=true | Enable leader election; with a single instance there is nothing to elect, it matters when several are running |
--cluster-cidr=$podClusterIP | The pod network CIDR |
- Start the controller-manager service
```
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl status kube-controller-manager
```
The result looks like this:
```
24[root@server81 conf]# systemctl daemon-reload
[root@server81 conf]# systemctl enable kube-controller-manager
[root@server81 conf]# systemctl start kube-controller-manager
[root@server81 conf]# systemctl status kube-controller-manager
● kube-controller-manager.service - Kube-controller-manager Service
Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 10:22:37 HKT; 33min ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 2246 (kube-controller)
CGroup: /system.slice/kube-controller-manager.service
└─2246 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --master=http://172.16....
Aug 20 10:22:37 server81 kube-controller-manager[2246]: I0820 10:22:37.577898 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.548284 2246 controller_utils.go:1025] Waiting for cac...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.568248 2246 controller_utils.go:1025] Waiting for cac...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.595675 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.595716 2246 garbagecollector.go:142] Garbage collecto...rbage
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.650186 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:22:38 server81 kube-controller-manager[2246]: I0820 10:22:38.668935 2246 controller_utils.go:1032] Caches are sync...oller
Aug 20 10:29:56 server81 kube-controller-manager[2246]: W0820 10:29:56.356490 2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Aug 20 10:39:47 server81 kube-controller-manager[2246]: W0820 10:39:47.125097 2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Aug 20 10:51:45 server81 kube-controller-manager[2246]: W0820 10:51:45.878609 2246 reflector.go:341] k8s.io/kubernetes/vendo... old.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 conf]#
```
## Installing kube-scheduler
1. Write kube-scheduler.service (/usr/lib/systemd/system)
Write kube-scheduler.service into the /usr/lib/systemd/system directory:
```
21[root@server81 install_k8s_master]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kube-scheduler Service
After=network.target
[Service]
Type=simple
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler \
$KUBE_LOGTOSTDERR \
$KUBE_LOG_LEVEL \
$KUBE_MASTER \
$KUBE_SCHEDULER_ARGS
Restart=always
LimitNOFILE=65536
[Install]
WantedBy=default.target
[root@server81 install_k8s_master]#
```
Notes on the kube-scheduler.service parameters
```
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/scheduler
```
Explanation: the two configuration files the service reads.
```
ExecStart=/usr/bin/kube-scheduler \
```
Explanation: the binary to run (/usr/bin/kube-scheduler) and the flags it starts with.
2. The scheduler config file (/etc/kubernetes)
Write the scheduler file into the /etc/kubernetes directory.
```
11[root@server81 install_k8s_master]# cat /etc/kubernetes/
apiserver config controller-manager kubernetesTLS/ scheduler token.csv
[root@server81 install_k8s_master]# cat /etc/kubernetes/scheduler
###
# The following values are used to configure the kubernetes scheduler
# defaults from config and scheduler should be adequate
# Add your own!
KUBE_SCHEDULER_ARGS="--master=http://172.16.5.81:8080 --leader-elect=true --address=127.0.0.1"
[root@server81 install_k8s_master]#
```
Notes on the scheduler parameters
Flag | Description |
---|---|
--master=http://172.16.5.81:8080 | The apiserver address on the master |
--leader-elect=true | Enable leader election; with a single instance there is nothing to elect, it matters when several are running |
--address=127.0.0.1 | Local listen address; it must be 127.0.0.1 because kube-apiserver expects the scheduler and controller-manager to run on the same machine |
3. Start the scheduler service
```
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler
systemctl status kube-scheduler
```
The result looks like this:
```
20[root@server81 install_k8s_master]# systemctl daemon-reload
[root@server81 install_k8s_master]# systemctl enable kube-scheduler
[root@server81 install_k8s_master]# systemctl restart kube-scheduler
[root@server81 install_k8s_master]# systemctl status kube-scheduler
● kube-scheduler.service - Kube-scheduler Service
Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 11:12:28 HKT; 686ms ago
Main PID: 2459 (kube-scheduler)
CGroup: /system.slice/kube-scheduler.service
└─2459 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --master=http://172.16.5.81:8080...
Aug 20 11:12:28 server81 systemd[1]: Started Kube-scheduler Service.
Aug 20 11:12:28 server81 systemd[1]: Starting Kube-scheduler Service...
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.724918 2459 options.go:148] WARNING: all flags other than --c... ASAP.
Aug 20 11:12:28 server81 kube-scheduler[2459]: I0820 11:12:28.727302 2459 server.go:126] Version: v1.11.0
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.728311 2459 authorization.go:47] Authorization is disabled
Aug 20 11:12:28 server81 kube-scheduler[2459]: W0820 11:12:28.728332 2459 authentication.go:55] Authentication is disabled
Aug 20 11:12:28 server81 kube-scheduler[2459]: I0820 11:12:28.728341 2459 insecure_serving.go:47] Serving healthz insecurel...:10251
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_k8s_master]#
```
At this point all the services the master needs are installed; let's check the component status:
```
11 [root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver config controller-manager kubernetesTLS scheduler token.csv
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
[root@server81 install_k8s_master]#
```
As you can see, every component, including etcd, is running normally.
Next we create the kube-proxy kubeconfig and the kubelet bootstrapping kubeconfig files, which provide TLS authentication for the node services; kube-proxy and kubelet use them to access the apiserver.
## Creating the kube-proxy kubeconfig file and its cluster parameters
The kube-proxy kubeconfig file carries the cluster parameters that give the kube-proxy user access to the apiserver APIs it needs. After running the commands below it is generated under /etc/kubernetes automatically.
```
#!/bin/bash
```
(The kube-proxy portion of these commands appears in full inside Step6_create_kubeconfig_file.sh, shown below.)
Creating the kubelet bootstrapping kubeconfig file and its cluster parameters
Create the bootstrap kubeconfig for kubelet; it is what allows the apiserver to issue the kubelet's kubeconfig and key pair automatically. Once this file is in place, three files are created automatically when kubelet starts on a node; that part is covered in the node deployment section later.
```
## Set the kubelet cluster parameters
kubectl config set-cluster kubernetes \
  --certificate-authority=$kubernetesTLSDir/ca.pem \
  --embed-certs=true \
  --server=https://$MASTER_IP:6443 \
  --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
## Set the kubelet user's credentials
kubectl config set-credentials kubelet-bootstrap \
  --token=$BOOTSTRAP_TOKEN \
  --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
## Set the default context for the kubelet-bootstrap user in the cluster
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
## Switch to the default context
kubectl config use-context default \
  --kubeconfig=$kubernetesDir/bootstrap.kubeconfig
## Create the RBAC role binding for kubelet bootstrap
kubectl create --insecure-skip-tls-verify clusterrolebinding kubelet-bootstrap \
  --clusterrole=system:node-bootstrapper \
  --user=kubelet-bootstrap
## Parameter notes:
# 1. --insecure-skip-tls-verify skips TLS verification while creating the kubelet-bootstrap binding
# 2. cluster role: system:node-bootstrapper
# 3. cluster user: kubelet-bootstrap
```
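For reference, once a node's kubelet later starts with this bootstrap kubeconfig, it submits a CSR that must be approved on the master before its certificate is issued. The CSR name below is a placeholder:
```
kubectl get csr
# NAME          AGE   REQUESTOR           CONDITION
# node-csr-...  1m    kubelet-bootstrap   Pending
kubectl certificate approve node-csr-...
```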
## Automating the creation of the kube-proxy kubeconfig and kubelet bootstrapping kubeconfig files
If you have read this far you probably feel there are a lot of commands and it is all rather tedious. No problem, here is another coffee-break script:
```
104[root@server81 install_k8s_master]# cat configDir/conf/BOOTSTRAP_TOKEN
4b395732894828d5a34737d83c334330
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# cat Step6_create_kubeconfig_file.sh
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)
serviceDir=/usr/lib/systemd/system
binDir=/usr/bin
kubernetesDir=/etc/kubernetes
kubernetesTLSDir=/etc/kubernetes/kubernetesTLS
configdir=$basedir/configDir
configServiceDir=$configdir/service
configConfDir=$configdir/conf
## set param
MASTER_IP=`python -c "import socket;print([(s.connect(('8.8.8.8', 53)), s.getsockname()[0], s.close()) for s in [socket.socket(socket.AF_INET, socket.SOCK_DGRAM)]][0][1])"`
BOOTSTRAP_TOKEN=`cat $configConfDir/BOOTSTRAP_TOKEN`
#echo $BOOTSTRAP_TOKEN
## function and implments
# set proxy
function create_proxy_kubeconfig(){
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://$MASTER_IP:6443 \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
create_proxy_kubeconfig
function config_proxy_credentials(){
kubectl config set-credentials kube-proxy \
--client-certificate=$kubernetesTLSDir/proxy.pem \
--client-key=$kubernetesTLSDir/proxy.key \
--embed-certs=true \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
config_proxy_credentials
function config_proxy_context(){
kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
config_proxy_context
function set_proxy_context(){
kubectl config use-context default --kubeconfig=$kubernetesDir/kube-proxy.kubeconfig
}
set_proxy_context
## set bootstrapping
function create_kubelet_bootstrapping_kubeconfig(){
kubectl config set-cluster kubernetes \
--certificate-authority=$kubernetesTLSDir/ca.pem \
--embed-certs=true \
--server=https://$MASTER_IP:6443 \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
create_kubelet_bootstrapping_kubeconfig
function config_kubelet_bootstrapping_credentials(){
kubectl config set-credentials kubelet-bootstrap \
--token=$BOOTSTRAP_TOKEN \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
config_kubelet_bootstrapping_credentials
function config_kubernetes_bootstrap_kubeconfig(){
kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
config_kubernetes_bootstrap_kubeconfig
function set_bootstrap_context(){
kubectl config use-context default \
--kubeconfig=$kubernetesDir/bootstrap.kubeconfig
}
set_bootstrap_context
## create rolebinding
function create_cluster_rolebinding(){
kubectl create --insecure-skip-tls-verify clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap
}
create_cluster_rolebinding
[root@server81 install_k8s_master]#
```
The result looks like this:
```
17[root@server81 install_k8s_master]# ./Step6_create_kubeconfig_file.sh
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver config kube-proxy.kubeconfig scheduler
bootstrap.kubeconfig controller-manager kubernetesTLS/ token.csv
[root@server81 install_k8s_master]# ls /etc/kubernetes/
apiserver bootstrap.kubeconfig config controller-manager kube-proxy.kubeconfig kubernetesTLS scheduler token.csv
[root@server81 install_k8s_master]#
```
The generated kube-proxy.kubeconfig looks like this:
```
21[root@server81 install_k8s_master]# cat /etc/kubernetes/kube-proxy.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHVENDQWdHZ0F3SUJBZ0lKQVAxbEpzOTFHbG9wTUEwR0NTcUdTSWIzRFFFQkN3VUFNQ014RXpBUkJnTlYKQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0Y3pBZUZ3MHhPREE0TVRreE5ESXhORFJhRncwMApOakF4TURReE5ESXhORFJhTUNNeEV6QVJCZ05WQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0CmN6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5BejJxOUNsVWozZmNTY20wVTYKWnhrTVFCVVJzSFpFeUpIbXhMWUR1RmNzbGlyUjZxZHFSbExjM3Z1SnlVSHB3dUF5QzZxYzlaZE52clNCUkhOegpxUVFSREVuUENMQXQ0ZFVkUjh2NnQvOVhKbnJ0Y0k3My94U0RKNno2eFh3K2MvTy95c0NET3pQNkFDcmE5cHlPCmJpQ1ZRSEJ4eEI3bGxuM0ErUEFaRWEzOHZSNmhTSklzRndxVjAwKy9iNSt5K3FvVVdtNWFtcS83OWNIM2Zwd0kKNnRmUlZIeHAweXBKNi9TckYyZWVWVU1KVlJxZWtiNjBuZkJRUUNEZ2YyL3lSOGNxVDZlV3VDdmZnVEdCV01QSQpPSjVVM1VxekNMVGNpNHpDSFhaTUlra25EWVFuNFR6Qm05MitzTGhXMlpFZk5DOUxycFZYWHpzTm45alFzeTA3ClliOENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUI4R0ExVWQKSXdRWU1CYUFGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSQpodmNOQVFFTEJRQURnZ0VCQUtNVGJXcng5WXJmSXByY3RHMThTanJCZHVTYkhLL05FRGcySHNCb1BrU2YwbE1TCmdGTnNzOGZURlliKzY3UWhmTnA1MjBodnk3M3JKU29OVkJweWpBWDR1SnRjVG9aZDdCZVhyUHdNVWVjNXRjQWoKSFdvY1dKaXNpck0vdFV4cUxLekdRdnFhVDhmQy9UUW5kTGUxTkJ0cEFQbjM5RzE5VFVialMvUTlKVE1qZVdMWAo0dU5MVExGUVUrYTAwTWMrMGVSWjdFYUVRSks2U0h1OUNuSEtNZnhIVC81UTdvbXBrZlBtTTZLT0VOVndaK0Q5Clh0ZzlIUmlrampFMGtsNHB3TmlHRnZQYVhuY0V5RDlwVW5vdWI0RGc2UHJ1MU9zTjYxakwyd2VneVY4WU1nUVEKWEdkVTIveExMcEh2cVlPVDNRay9mNWw5MHpackQvYm5vZGhxNS84PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://172.16.5.81:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kube-proxy
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kube-proxy
user:
client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN1ekNDQWFNQ0NRRFZDSG9rSldveEdEQU5CZ2txaGtpRzl3MEJBUXNGQURBak1STXdFUVlEVlFRRERBcHIKZFdKbGNtNWxkR1Z6TVF3d0NnWURWUVFLREFOck9ITXdIaGNOTVRnd09ERTVNVFF5TVRRMFdoY05Namd3T0RFMgpNVFF5TVRRMFdqQWNNUm93R0FZRFZRUUREQkZ6ZVhOMFpXMDZhM1ZpWlMxd2NtOTRlVENDQVNJd0RRWUpLb1pJCmh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTWpOVitwVGVFU2d6di9rcDZvQ3Z2T3NoUXFYS0t3RWFrTWEKcDRvNEdoZUZySzVUbW53eTc4YWpJdHM4b0Nyb3l2Q1lVR2VVcVJqaG1xSUdRWWJxWVFPTy9NZ21pZmdFMVFlego3RzNYKzJsQ25qRThOVnZBd011QXpYU0w4L3dkU1NEUTZDdGdvUkVCcFhTQUJWYStaMldXVy9VSm53ZFlFWHlGClh2N3ZERWRJZG1pUWNjWEtMcHRuMWFzV25nek1aVG9EMDVjMWxQSTlZZ1ZqMFVsNldWMkVMdHhxdGVqdXJHT2kKN3R0K3hRanY0ckdQZ01udTNqOEF1QTNLZXpSUFJ0TVA1RkF6SHZ4WVQ3RU0rRzVmU2JGWFY0ZVVMb0czS3pzWQo3eitDYlF1bnYyNmhXMFM5dWtZT0lNWnA4eVJtcHJ6cGxSVnh5d0dJUUw2ajhqdndkcXNDQXdFQUFUQU5CZ2txCmhraUc5dzBCQVFzRkFBT0NBUUVBQmNUazU0TUY5YnNpaDZaVXJiakh0MmFXR3VaTzZBODlZa3ZUL21VcTRoTHUKd2lUcHRKZWNJWEh5RkZYemVCSDJkUGZIZ1lldEMrQTJGS0dsZFJ1SHJuUW1iTWFkdjN6bGNjbEl2ald6dU1GUQpnenhUQUJ0dGVNYkYvL2M5cE9TL2ZmQS9OcVV0akVEUzlJVXZUTDdjUEs3Z0dMSzRrQWY2N2hPTERLb1NGT2ZjCnp0bEpXWkhPaEpGRjM0bkQySytXMmZzb0g4WFdTeDd1N3FmSHFFRkFNOW5BRjRyQjNZdUFHKzdIOUxMbmVaK1IKbHBTeThLNzBVZUdUVFpFdW5yMzJwMmJEZWxQN0tCTWsvbmUxV01PbzRnL01QUUhOTm5XZHlNeFJ6bHBOeTBregpOekVydVlhbHpINDVTVHIrNytCMkNhcS9sWDFTSWpENXBYVDhZMXRtSFE9PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBeU0xWDZsTjRSS0RPLytTbnFnSys4NnlGQ3Bjb3JBUnFReHFuaWpnYUY0V3NybE9hCmZETHZ4cU1pMnp5Z0t1aks4SmhRWjVTcEdPR2FvZ1pCaHVwaEE0Nzh5Q2FKK0FUVkI3UHNiZGY3YVVLZU1UdzEKVzhEQXk0RE5kSXZ6L0IxSklORG9LMkNoRVFHbGRJQUZWcjVuWlpaYjlRbWZCMWdSZklWZS91OE1SMGgyYUpCeAp4Y291bTJmVnF4YWVETXhsT2dQVGx6V1U4ajFpQldQUlNYcFpYWVF1M0dxMTZPNnNZNkx1MjM3RkNPL2lzWStBCnllN2VQd0M0RGNwN05FOUcwdy9rVURNZS9GaFBzUXo0Ymw5SnNWZFhoNVF1Z2Jjck94anZQNEp0QzZlL2JxRmIKUkwyNlJnNGd4bW56SkdhbXZPbVZGWEhMQVloQXZxUHlPL0IycXdJREFRQUJBb0lCQVFDeU5KcmJXT3laYTJXSgo4REZrVGorTkhnU01XNDQ2NjBncStaTEt0Zk5pQUw0NWovVEFXS3czU3p4NStSbmtPdWt3RU56NnNCSktCSjRwClFRZ1NaaHRtL3hVVHhEQVpycUFveitMNXNQNXNjalRXV1NxNW5SejgvZmhZZ0lRdHNRZmZXY2RTQjlXcHRCNVUKZi9FOUJJbmF2RkFyN1RmM1dvOWFSVHNEWUw4eTJtVjJrakNpMkd4S3U4K3BQWXN3ZUIrbGZjc1QyNlB3ODBsRgpXTmZVODRzdDE1SjBCNitRSmhEQnNDb3NpbGxrcFZnaDhPMzVNNmE3WjZlL3IrZnZuYjcycXd2MkdGQm0rNEpmCmRydVJtTHRLdHUxVGhzUGQ4YkQ2MXpTblMrSXoyUGxGWnk0RkY3cFhWU2RwbjVlSm00dkJMM3NOem9HWGlGUmIKOTAydFo5d1JBb0dCQVB6ZXZEZWhEYVBiZ1FLTU5hMFBzN2dlNDZIUkF6Rzl4RDh2RXk4dEVXcVVVY2c3Mndqawp6MGFvLzZvRkFDM0tkM3VkUmZXdmhrV2RrcE9CMXIzMml6Y29Ka3lOQmxDc2YxSDF2dVJDb0gwNTZwM3VCa3dHCjFsZjFWeDV0cjVHMU5laXdzQjdsTklDa2pPNTg2b3F6M3NNWmZMcHM1ZlMxeVZFUExrVmErL2N0QW9HQkFNdEoKbnhpQXNCMnZKaXRaTTdrTjZjTzJ1S0lwNHp0WjZDMFhBZmtuNnd5Zk9zd3lyRHdNUnA2Yk56OTNCZzk0azE4aQpIdlJ3YzJPVVBkeXVrU2YyVGZVbXN6L0h1OWY0emRCdFdYM2lkOE50b29MYUd6RnVVN3hObVlrUWJaL2Y1ZmpNCmtpZzlVZVJYdng5THJTa3RDdEdyRWMvK0JubHNrRk1xc2IrZ1FVdzNBb0dCQUs0SzA3cnFFNHhMQVNGeXhXTG0KNHNpQUlpWjJ5RjhOQUt5SVJ3ajZXUGxsT21DNXFja1dTditVUTl1T2M1QVF3V29JVm1XQ09NVmpiY1l1NEZHQgpCbEtoUkxMOWdYSTNONjUrbUxOY2xEOThoRm5Nd1BMRTVmUkdQWDhJK1lVdEZ2eWYxNmg4RTBYVGU5aU5pNVNKCnRuSEw4Z2dSK2JnVEFvdlRDZ0xjVzMzRkFvR0FSZWFYelM0YTRPb2ovczNhYWl4dGtEMlpPVEdjRUFGM1EySGcKN05LY0VTZ0RhTW1YemNJTzJtVFcxM3pPMmEwRlI3WU0zTko1NnVqRGFNbWg0aExnZFlhTUprZEF3Uit0YlpqYwpKOXdpZ0ZHSGl1VUNhcm5jRXlpL3ZaQ25rVXpFNEFzL3lwUmpQMWdvd05NZHhNWFhMWWRjUlorOGpDNFhabkdNCjB5NkFwWHNDZ1lFQXh6aUkyK2tUekNJcENnOGh3WXdiQ21sTVBaM3RBNXRLRHhKZmNjdWpXSExHVkNnMVd6QTAKdHZuUmxJbnZxdzFXOWtsSGlHTlhmTUpqczhpeXk5WUl4S0NKeTdhUU85WXZ1SVR6OC9PMHVCRURlQ1gvOHFDTwpzRGJ0eHpsa3A2NVdaYTFmR2FLRWVwcHFtWUU2NUdiZk91eHNxRENDSG1WWXcvZmR0M2NnMjI0PQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=
[root@server81 install_k8s_master]#
```
The generated bootstrap.kubeconfig looks like this:
```
20[root@server81 install_k8s_master]# cat /etc/kubernetes/bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURHVENDQWdHZ0F3SUJBZ0lKQVAxbEpzOTFHbG9wTUEwR0NTcUdTSWIzRFFFQkN3VUFNQ014RXpBUkJnTlYKQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0Y3pBZUZ3MHhPREE0TVRreE5ESXhORFJhRncwMApOakF4TURReE5ESXhORFJhTUNNeEV6QVJCZ05WQkFNTUNtdDFZbVZ5Ym1WMFpYTXhEREFLQmdOVkJBb01BMnM0CmN6Q0NBU0l3RFFZSktvWklodmNOQVFFQkJRQURnZ0VQQURDQ0FRb0NnZ0VCQU5BejJxOUNsVWozZmNTY20wVTYKWnhrTVFCVVJzSFpFeUpIbXhMWUR1RmNzbGlyUjZxZHFSbExjM3Z1SnlVSHB3dUF5QzZxYzlaZE52clNCUkhOegpxUVFSREVuUENMQXQ0ZFVkUjh2NnQvOVhKbnJ0Y0k3My94U0RKNno2eFh3K2MvTy95c0NET3pQNkFDcmE5cHlPCmJpQ1ZRSEJ4eEI3bGxuM0ErUEFaRWEzOHZSNmhTSklzRndxVjAwKy9iNSt5K3FvVVdtNWFtcS83OWNIM2Zwd0kKNnRmUlZIeHAweXBKNi9TckYyZWVWVU1KVlJxZWtiNjBuZkJRUUNEZ2YyL3lSOGNxVDZlV3VDdmZnVEdCV01QSQpPSjVVM1VxekNMVGNpNHpDSFhaTUlra25EWVFuNFR6Qm05MitzTGhXMlpFZk5DOUxycFZYWHpzTm45alFzeTA3ClliOENBd0VBQWFOUU1FNHdIUVlEVlIwT0JCWUVGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUI4R0ExVWQKSXdRWU1CYUFGRWQ0bUxtN292MFdxL2FUTVJLUnlaaVVMOTFNTUF3R0ExVWRFd1FGTUFNQkFmOHdEUVlKS29aSQpodmNOQVFFTEJRQURnZ0VCQUtNVGJXcng5WXJmSXByY3RHMThTanJCZHVTYkhLL05FRGcySHNCb1BrU2YwbE1TCmdGTnNzOGZURlliKzY3UWhmTnA1MjBodnk3M3JKU29OVkJweWpBWDR1SnRjVG9aZDdCZVhyUHdNVWVjNXRjQWoKSFdvY1dKaXNpck0vdFV4cUxLekdRdnFhVDhmQy9UUW5kTGUxTkJ0cEFQbjM5RzE5VFVialMvUTlKVE1qZVdMWAo0dU5MVExGUVUrYTAwTWMrMGVSWjdFYUVRSks2U0h1OUNuSEtNZnhIVC81UTdvbXBrZlBtTTZLT0VOVndaK0Q5Clh0ZzlIUmlrampFMGtsNHB3TmlHRnZQYVhuY0V5RDlwVW5vdWI0RGc2UHJ1MU9zTjYxakwyd2VneVY4WU1nUVEKWEdkVTIveExMcEh2cVlPVDNRay9mNWw5MHpackQvYm5vZGhxNS84PQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
server: https://172.16.5.81:6443
name: kubernetes
contexts:
- context:
cluster: kubernetes
user: kubelet-bootstrap
name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
user:
token: 4b395732894828d5a34737d83c334330
[root@server81 install_k8s_master]#
```
Finally, a summary of the master deployment
Check the master components and overall cluster status:
```
13 [root@server81 install_k8s_master]# kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-0 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
[root@server81 install_k8s_master]#
[root@server81 install_k8s_master]# kubectl cluster-info
Kubernetes master is running at https://172.16.5.81:6443
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@server81 install_k8s_master]#
```
Confirm which files the master needs to copy to the nodes
When the nodes are deployed, kube-proxy and kubelet need the certificates and kubeconfig files generated above; here is the full list:
```
21[root@server81 install_k8s_master]# tree /etc/kubernetes/
/etc/kubernetes/
├── apiserver
├── bootstrap.kubeconfig
├── config
├── controller-manager
├── kube-proxy.kubeconfig
├── kubernetesTLS
│ ├── admin.key
│ ├── admin.pem
│ ├── apiserver.key
│ ├── apiserver.pem
│ ├── ca.key
│ ├── ca.pem
│ ├── proxy.key
│ └── proxy.pem
├── scheduler
└── token.csv
1 directory, 15 files
[root@server81 install_k8s_master]#
```
The apiserver, controller-manager, and scheduler config files do not need to be copied to the node servers, but being lazy, I simply copy the whole directory over.
That covers deploying the master and everything you need to know about its certificates; next we switch to deploying the nodes.
Deploying the node services
With the steps above complete, we can start deploying the node services. Before doing so, copy the TLS and kubeconfig files created during the master deployment to every node.
Node server topology
Since a lot has been covered already, and scrolling back up to find the topology is tedious, here it is again before we deploy the node services:
1. The three-node etcd cluster was already deployed in the earlier article.
2. The master services were deployed on server81.
3. The next step is to deploy the node services on all three servers: Server81, 86, and 87.
Now let's get started deploying the node services.
Copy the TLS and kubeconfig files created on the master to the nodes
Server81 is itself the master node, so nothing needs to be copied there. Server86 and 87 do need the files; run the following:
```
34[root@server81 etc]# scp -r kubernetes root@server86:/etc
ca.pem 100% 1135 243.9KB/s 00:00
ca.key 100% 1679 383.9KB/s 00:00
apiserver.pem 100% 1302 342.6KB/s 00:00
apiserver.key 100% 1675 378.4KB/s 00:00
admin.pem 100% 1050 250.3KB/s 00:00
admin.key 100% 1675 401.5KB/s 00:00
proxy.pem 100% 1009 253.2KB/s 00:00
proxy.key 100% 1679 74.5KB/s 00:00
token.csv 100% 84 4.5KB/s 00:00
config 100% 656 45.9KB/s 00:00
apiserver 100% 1656 484.7KB/s 00:00
controller-manager 100% 615 163.8KB/s 00:00
scheduler 100% 243 10.9KB/s 00:00
kube-proxy.kubeconfig 100% 5451 335.3KB/s 00:00
bootstrap.kubeconfig 100% 1869 468.9KB/s 00:00
[root@server81 etc]#
[root@server81 etc]# scp -r kubernetes root@server87:/etc
ca.pem 100% 1135 373.4KB/s 00:00
ca.key 100% 1679 470.8KB/s 00:00
apiserver.pem 100% 1302 511.5KB/s 00:00
apiserver.key 100% 1675 565.6KB/s 00:00
admin.pem 100% 1050 340.2KB/s 00:00
admin.key 100% 1675 468.4KB/s 00:00
proxy.pem 100% 1009 247.8KB/s 00:00
proxy.key 100% 1679 516.4KB/s 00:00
token.csv 100% 84 30.2KB/s 00:00
config 100% 656 217.0KB/s 00:00
apiserver 100% 1656 415.7KB/s 00:00
controller-manager 100% 615 240.0KB/s 00:00
scheduler 100% 243 92.1KB/s 00:00
kube-proxy.kubeconfig 100% 5451 1.3MB/s 00:00
bootstrap.kubeconfig 100% 1869 614.0KB/s 00:00
[root@server81 etc]#
```
Check the copied files on Server86:
```
30 [root@server86 etc]# pwd
/etc
[root@server86 etc]#
[root@server86 etc]# tree kubernetes/
kubernetes/
├── apiserver
├── bootstrap.kubeconfig
├── config
├── controller-manager
├── kube-proxy.kubeconfig
├── kubernetesTLS
│ ├── admin.key
│ ├── admin.pem
│ ├── apiserver.key
│ ├── apiserver.pem
│ ├── ca.key
│ ├── ca.pem
│ ├── proxy.key
│ └── proxy.pem
├── scheduler
└── token.csv
1 directory, 15 files
[root@server86 etc]#
[root@server86 etc]# cd kubernetes/
[root@server86 kubernetes]# ls
apiserver bootstrap.kubeconfig config controller-manager kube-proxy.kubeconfig kubernetesTLS scheduler token.csv
[root@server86 kubernetes]# ls kubernetesTLS/
admin.key admin.pem apiserver.key apiserver.pem ca.key ca.pem proxy.key proxy.pem
[root@server86 kubernetes]#
```
Check the copied files on Server87:
```
30 [root@server87 ~]# cd /etc/
[root@server87 etc]# pwd
/etc
[root@server87 etc]# tree kubernetes/
kubernetes/
├── apiserver
├── bootstrap.kubeconfig
├── config
├── controller-manager
├── kube-proxy.kubeconfig
├── kubernetesTLS
│ ├── admin.key
│ ├── admin.pem
│ ├── apiserver.key
│ ├── apiserver.pem
│ ├── ca.key
│ ├── ca.pem
│ ├── proxy.key
│ └── proxy.pem
├── scheduler
└── token.csv
1 directory, 15 files
[root@server87 etc]# cd kubernetes/
[root@server87 kubernetes]# ls
apiserver bootstrap.kubeconfig config controller-manager kube-proxy.kubeconfig kubernetesTLS scheduler token.csv
[root@server87 kubernetes]#
[root@server87 kubernetes]# ls kubernetesTLS/
admin.key admin.pem apiserver.key apiserver.pem ca.key ca.pem proxy.key proxy.pem
[root@server87 kubernetes]#
```
Copying the TLS certificates for accessing the etcd cluster
- Every node needs to reach the etcd cluster: when Calico or flannel networking is deployed later, certificates are required to access etcd. That part is covered in the later networking articles.
- As it happens, Server81, 86, and 87 are the same three servers that make up the etcd cluster, so the certificate directories were already in place when etcd was deployed.
- However, if a new server is later added as a node, the etcd certificates must be copied to the corresponding directory on it.
Here is where the etcd cluster TLS files should live on a node. As mentioned when deploying etcd, the directory is customizable; it just needs to be the same path on every node.
Server81's etcd TLS file path (/etc/etcd/etcdSSL):
```
24 [root@server81 etc]# cd etcd/
[root@server81 etcd]# ls
etcd.conf etcdSSL
[root@server81 etcd]#
[root@server81 etcd]# cd etcdSSL/
[root@server81 etcdSSL]#
[root@server81 etcdSSL]# pwd
/etc/etcd/etcdSSL
[root@server81 etcdSSL]#
[root@server81 etcdSSL]# ls
ca-config.json ca.csr ca-csr.json ca-key.pem ca.pem etcd.csr etcd-csr.json etcd-key.pem etcd.pem
[root@server81 etcdSSL]#
[root@server81 etcdSSL]# ls -ll
total 36
-rw-r--r-- 1 root root 288 Aug 14 14:05 ca-config.json
-rw-r--r-- 1 root root 997 Aug 14 14:05 ca.csr
-rw-r--r-- 1 root root 205 Aug 14 14:05 ca-csr.json
-rw------- 1 root root 1675 Aug 14 14:05 ca-key.pem
-rw-r--r-- 1 root root 1350 Aug 14 14:05 ca.pem
-rw-r--r-- 1 root root 1066 Aug 14 14:05 etcd.csr
-rw-r--r-- 1 root root 296 Aug 14 14:05 etcd-csr.json
-rw------- 1 root root 1675 Aug 14 14:05 etcd-key.pem
-rw-r--r-- 1 root root 1436 Aug 14 14:05 etcd.pem
[root@server81 etcdSSL]#
```
Server86's etcd TLS file path (/etc/etcd/etcdSSL):
```
[root@server86 etcd]# cd etcdSSL/
```
Server87's etcd TLS file path (/etc/etcd/etcdSSL):
```
[root@server87 etcd]# cd etcdSSL/
```
Node deployment steps
- Deploy docker-ce (if you install plain docker instead, you need to set the cgroup driver parameter; with docker-ce that is not necessary)
- Deploy the kubelet service
- Deploy the kube-proxy service
Every node needs these three services. I'll walk through the deployment on Server81 first; the process on Server86 and 87 is exactly the same.
Deploying docker-ce
If you are not familiar with installing Docker, see the official Docker installation docs (access from mainland China is smoother through a proxy).
1. Download the docker-ce rpm package
Click here to download the docker-ce rpm installer.
2. Install docker-ce
```
yum install docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm -y
```
The installation proceeds as follows:
```
33[root@server81 docker]# ls
certs.d docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm docker.service.simple install_docker-ce.sh set_docker_network.sh
daemon.json docker.service erase_docker-ce.sh login_registry.sh test.sh
[root@server81 docker]#
[root@server81 docker]# yum install docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm -y
Loaded plugins: fastestmirror
Examining docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Marking docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Loading mirror speeds from cached hostfile
......
Installed:
docker-ce.x86_64 0:18.03.0.ce-1.el7.centos
Dependency Installed:
audit-libs-python.x86_64 0:2.8.1-3.el7 checkpolicy.x86_64 0:2.5-6.el7 container-selinux.noarch 2:2.66-1.el7
libcgroup.x86_64 0:0.41-15.el7 libseccomp.x86_64 0:2.3.1-3.el7 libsemanage-python.x86_64 0:2.5-11.el7
pigz.x86_64 0:2.3.4-1.el7 policycoreutils-python.x86_64 0:2.5-22.el7 python-IPy.noarch 0:0.75-6.el7
setools-libs.x86_64 0:3.3.8-2.el7
Dependency Updated:
audit.x86_64 0:2.8.1-3.el7 audit-libs.x86_64 0:2.8.1-3.el7 libselinux.x86_64 0:2.5-12.el7
libselinux-python.x86_64 0:2.5-12.el7 libselinux-utils.x86_64 0:2.5-12.el7 libsemanage.x86_64 0:2.5-11.el7
libsepol.x86_64 0:2.5-8.1.el7 policycoreutils.x86_64 0:2.5-22.el7 selinux-policy.noarch 0:3.13.1-192.el7_5.4
selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4
Complete!
[root@server81 docker]#
```
3. Enable and start docker-ce
```
systemctl daemon-reload
systemctl enable docker
systemctl restart docker
systemctl status docker
```
The run looks like this:
```
48[root@server81 install_k8s_node]# systemctl daemon-reload
[root@server81 install_k8s_node]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@server81 install_k8s_node]# systemctl restart docker
[root@server81 install_k8s_node]# systemctl status docker
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 14:11:17 HKT; 639ms ago
Docs: https://docs.docker.com
Main PID: 3014 (dockerd)
Memory: 36.4M
CGroup: /system.slice/docker.service
├─3014 /usr/bin/dockerd
└─3021 docker-containerd --config /var/run/docker/containerd/containerd.toml
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17+08:00" level=info msg=serving... address="/var/run/docker/c...d/grpc"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17+08:00" level=info msg="containerd successfully booted in 0....tainerd
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.492174891+08:00" level=info msg="Graph migration to content...econds"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.493087053+08:00" level=info msg="Loading containers: start."
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.608563905+08:00" level=info msg="Default bridge (docker0) i...ddress"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.645395453+08:00" level=info msg="Loading containers: done."
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.659457843+08:00" level=info msg="Docker daemon" commit=0520...03.0-ce
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.659619134+08:00" level=info msg="Daemon has completed initialization"
Aug 20 14:11:17 server81 dockerd[3014]: time="2018-08-20T14:11:17.669961967+08:00" level=info msg="API listen on /var/run/docker.sock"
Aug 20 14:11:17 server81 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 install_k8s_node]#
[root@server81 install_k8s_node]# docker version
Client:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:09:15 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Server:
Engine:
Version: 18.03.0-ce
API version: 1.37 (minimum version 1.12)
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:13:03 2018
OS/Arch: linux/amd64
Experimental: false
[root@server81 install_k8s_node]#
Copy the binary executables to the Node server (/usr/bin)
#!/bin/bash
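Only the first line of that script survived the original formatting. As a rough sketch (not the author's exact Step1_config.sh), a Node prep-and-copy script along these lines would produce the output shown below; the source directory is taken from that output, and the firewall/selinux/swap steps mirror the environment preparation done earlier:

#!/bin/bash
# Sketch only: disable firewall/selinux/swap, then copy the Node binaries into /usr/bin.
basedir=$(cd `dirname $0`; pwd)

# Stop the firewall and show its (now inactive) status
systemctl stop firewalld
systemctl disable firewalld
systemctl status firewalld --no-pager | head -n 5

# SELinux should already be disabled in /etc/selinux/config; report its status
setenforce 0 2>/dev/null
sestatus | grep "SELinux status"

# Turn off swap for the running system
swapoff -a

# Copy the kubernetes Node binaries (path taken from the output below)
for bin in kubectl kubelet kube-proxy; do
    cp -v ${basedir}/../install_kubernetes_software/${bin} /usr/bin/${bin}
    chmod +x /usr/bin/${bin}
done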
The result of running it looks like this:
[root@server81 install_k8s_node]# ./Step1_config.sh
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
SELinux status: disabled
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubectl’ -> ‘/usr/bin/kubectl’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubelet’ -> ‘/usr/bin/kubelet’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kube-proxy’ -> ‘/usr/bin/kube-proxy’
[root@server81 install_k8s_node]#
[root@server81 install_k8s_node]# ls -ll /usr/bin/kube*
-rwxr-xr-x 1 root root 185471375 Aug 19 22:57 /usr/bin/kube-apiserver
-rwxr-xr-x 1 root root 154056749 Aug 19 22:57 /usr/bin/kube-controller-manager
-rwxr-xr-x 1 root root 55421261 Aug 20 14:14 /usr/bin/kubectl
-rwxr-xr-x 1 root root 162998216 Aug 20 14:14 /usr/bin/kubelet
-rwxr-xr-x 1 root root 52055519 Aug 20 14:14 /usr/bin/kube-proxy
-rwxr-xr-x 1 root root 55610654 Aug 19 22:57 /usr/bin/kube-scheduler
[root@server81 install_k8s_node]#
First disable the swap partition, the firewall, and selinux on each Node server, then copy the binary executables into the /usr/bin directory. With that done, the next step is to deploy the kubelet and kube-proxy services on the Node.
Deploy the kubelet service
1. Write the kubelet.service file (/usr/lib/systemd/system)
Write kubelet.service into the /usr/lib/systemd/system directory:
[root@server81 install_k8s_node]# cat /usr/lib/systemd/system/kubelet.service
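The full unit file is not reproduced above. As a sketch of what it can look like, assembled from the EnvironmentFile and ExecStart fragments explained below and from the service status output further down (treat the exact variable list and WorkingDirectory as assumptions):

[Unit]
Description=Kubernetes Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBELET_ADDRESS \
            $KUBELET_PORT \
            $KUBELET_HOSTNAME \
            $KUBELET_CONFIG \
            $KUBELET_POD_INFRA_CONTAINER \
            $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target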
kubelet.service parameter notes
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/kubelet
Explanation: kubelet is configured to read two configuration files, config and kubelet. The config file was already written when the master services were deployed; it is a shared configuration file. The kubelet-specific configuration file is written separately below.
ExecStart=/usr/bin/kubelet \
Explanation: defines the binary executable that is run when the service starts (/usr/bin/kubelet) and the parameters it needs at startup (these parameters are read from the configuration files).
The kubelet configuration file (/etc/kubernetes)
Write the kubelet configuration file into the /etc/kubernetes/ directory:
[root@server81 install_k8s_node]# cat /etc/kubernetes/
apiserver config kubelet kubernetesTLS/ token.csv
bootstrap.kubeconfig controller-manager kube-proxy.kubeconfig scheduler
[root@server81 install_k8s_node]# cat /etc/kubernetes/kubelet
###
## kubernetes kubelet (minion) config
#
## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
#KUBELET_ADDRESS="--address=0.0.0.0"
#
## The port for the info server to serve on
#KUBELET_PORT="--port=10250"
#
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.5.81"
#
## location of the api-server
KUBELET_CONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
#
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=172.16.5.81:5000/pause-amd64:3.1"
#
## Add your own!
KUBELET_ARGS="--cluster-dns=10.0.6.200 --serialize-image-pulls=false --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/kubernetesTLS --cluster-domain=cluster.local. --hairpin-mode promiscuous-bridge --network-plugin=cni"
[root@server81 install_k8s_node]#
Notes on the parameters in the kubelet configuration file
## You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=172.16.5.81"
Explanation: this sets the Node's name; I override it with the server's IP address. If you are deploying on Server86 or Server87, just change it to the corresponding IP address. Once deployment is complete, run kubectl get node and you will see the node names you defined here.
## location of the api-server
KUBELET_CONFIG="--kubeconfig=/etc/kubernetes/kubelet.kubeconfig"
Explanation: defines the path of the kubelet kubeconfig file, which was created earlier during the master deployment.
## pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=172.16.5.81:5000/pause-amd64:3.1"
Explanation:
- When creating an application, kubelet depends on the pause image; without it, the pod's containers will fail to start.
- So every Node must have the pause image available. By default it has to be pulled from the official registry (which needs a proxy to reach), which slows container startup, so I pushed the pause image into my private registry so it can be pulled quickly on the internal network.
The private registry address of the pause image used here: 172.16.5.81:5000/pause-amd64:3.1
Readers can pull the pause image from the address below and then set up a private registry of their own. (The mirror is provided by another blog author, thanks to him; his kubernetes deployment write-up does not enable RBAC and is a minimal setup, worth a read if you are interested.)

docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1

If you can reach the official registry directly, just pull the upstream image:

docker pull k8s.gcr.io/pause-amd64:3.1

A fuller sketch of publishing the image into the private registry follows below.
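A minimal sketch of pushing the pause image into a private registry like the one used here, assuming a registry is already running at 172.16.5.81:5000 and that it serves plain http (so docker on every Node must trust it as an insecure registry):

# Pull, retag and push the pause image into the private registry
docker pull mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 172.16.5.81:5000/pause-amd64:3.1
docker push 172.16.5.81:5000/pause-amd64:3.1

# On every Node, trust the http registry (merge into any existing daemon.json rather than overwriting it)
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["172.16.5.81:5000"]
}
EOF
systemctl restart docker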
## Add your own!
KUBELET_ARGS="--cluster-dns=10.0.6.200 --serialize-image-pulls=false --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/kubernetesTLS --cluster-domain=cluster.local. --hairpin-mode promiscuous-bridge --network-plugin=cni"
Parameter | Description |
---|---|
--cluster-dns=10.0.6.200 | IP address of the internal DNS service on the kubernetes cluster network, used later by CoreDNS |
--serialize-image-pulls=false | Disables serialized image pulls so the kubelet can pull several images in parallel |
--bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig | Path to the bootstrap.kubeconfig file |
--cert-dir=/etc/kubernetes/kubernetesTLS | The kubernetes TLS directory; after the kubelet service starts it automatically creates its own certificate and key files in this folder |
--cluster-domain=cluster.local. | DNS domain of the kubernetes cluster |
--hairpin-mode promiscuous-bridge | Hairpin mode for the pod bridge network |
--network-plugin=cni | Enables the CNI network plugin, needed because the Calico network is used later |
If you want a more detailed look at kubelet's configuration parameters, see the official documentation (click here).
Start the kubelet service
systemctl daemon-reload
systemctl enable kubelet
systemctl start kubelet
systemctl status kubelet
The execution looks like this:
[root@server81 kubernetesTLS]# ls -ll
total 32
-rw-r--r-- 1 root root 1675 Aug 19 22:21 admin.key
-rw-r--r-- 1 root root 1050 Aug 19 22:21 admin.pem
-rw-r--r-- 1 root root 1675 Aug 19 22:21 apiserver.key
-rw-r--r-- 1 root root 1302 Aug 19 22:21 apiserver.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 ca.key
-rw-r--r-- 1 root root 1135 Aug 19 22:21 ca.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 proxy.key
-rw-r--r-- 1 root root 1009 Aug 19 22:21 proxy.pem
[root@server81 kubernetesTLS]#
[root@server81 kubernetesTLS]# systemctl daemon-reload
[root@server81 kubernetesTLS]# systemctl enable kubelet
[root@server81 kubernetesTLS]# systemctl start kubelet
[root@server81 kubernetesTLS]# systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:07:26 HKT; 640ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3589 (kubelet)
Memory: 16.1M
CGroup: /system.slice/kubelet.service
└─3589 /usr/bin/kubelet --logtostderr=true --v=0 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=172.16.5.81 --pod-infra-container-image=172.16.5.81:5000/...
Aug 20 15:07:26 server81 systemd[1]: Started Kubernetes Kubelet Server.
Aug 20 15:07:26 server81 systemd[1]: Starting Kubernetes Kubelet Server...
Aug 20 15:07:26 server81 kubelet[3589]: Flag --cluster-dns has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. Se...information.
Aug 20 15:07:26 server81 kubelet[3589]: Flag --serialize-image-pulls has been deprecated, This parameter should be set via the config file specified by the Kubelet's --confi...information.
Aug 20 15:07:26 server81 kubelet[3589]: Flag --cluster-domain has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag....information.
Aug 20 15:07:26 server81 kubelet[3589]: Flag --hairpin-mode has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. S...information.
Aug 20 15:07:26 server81 kubelet[3589]: I0820 15:07:26.364083 3589 feature_gate.go:230] feature gates: &{map[]}
Aug 20 15:07:26 server81 kubelet[3589]: I0820 15:07:26.364224 3589 feature_gate.go:230] feature gates: &{map[]}
Hint: Some lines were ellipsized, use -l to show in full.
[root@server81 kubernetesTLS]#
[root@server81 kubernetesTLS]# ls -ll
total 44
-rw-r--r-- 1 root root 1675 Aug 19 22:21 admin.key
-rw-r--r-- 1 root root 1050 Aug 19 22:21 admin.pem
-rw-r--r-- 1 root root 1675 Aug 19 22:21 apiserver.key
-rw-r--r-- 1 root root 1302 Aug 19 22:21 apiserver.pem
-rw-r--r-- 1 root root 1679 Aug 19 22:21 ca.key
-rw-r--r-- 1 root root 1135 Aug 19 22:21 ca.pem
-rw------- 1 root root 227 Aug 20 15:07 kubelet-client.key.tmp
-rw-r--r-- 1 root root 2177 Aug 20 15:07 kubelet.crt
-rw------- 1 root root 1679 Aug 20 15:07 kubelet.key
-rw-r--r-- 1 root root 1679 Aug 19 22:21 proxy.key
-rw-r--r-- 1 root root 1009 Aug 19 22:21 proxy.pem
[root@server81 kubernetesTLS]#
Note:
- As the folder listing shows, after the kubelet service starts it automatically generates three files: kubelet-client.key.tmp, kubelet.crt, and kubelet.key.
- If you need to redeploy the kubelet service, delete these three files first; otherwise they are treated as expired and the service fails to start.
- Also notice that kubelet-client.key.tmp is still only a temporary file and cannot be used yet: kubelet has sent a CSR request to the apiserver, and the apiserver has not approved it yet.
- So the next step is to go back to the master and approve the csr.
Approve the csr on the master node server
The script on the master to approve the csr is as follows:
#!/bin/bash
basedir=$(cd `dirname $0`;pwd)

## Approve the first Pending node CSR, then list the nodes
function node_approve_csr(){
    CSR=`kubectl get csr | grep csr | grep Pending | awk '{print $1}' | head -n 1`
    kubectl certificate approve $CSR
    kubectl get nodes
}

node_approve_csr
The csr approval run looks like this:
[root@server81 kubernetesTLS]# ls
Explanation:
- After running kubectl certificate approve node-csr-fH4Ct4Fg4TgzFV0dP-SlfVCtTo9XNCJjajzPohDVxHE,
- running kubectl get csr again shows the node-csr entry's status has changed to Approved,Issued,
- and kubectl get node now lists the node, although its status is still NotReady.
- Also, looking at the TLS folder, once the csr is approved the temporary kubelet-client.key.tmp file is replaced by the following files:
-rw------- 1 root root 1183 Aug 20 15:14 kubelet-client-2018-08-20-15-14-35.pem
lrwxrwxrwx 1 root root 68 Aug 20 15:14 kubelet-client-current.pem -> /etc/kubernetes/kubernetesTLS/kubelet-client-2018-08-20-15-14-35.pem
Finally, take a look at the kubelet logs after startup:
[root@server81 install_k8s_node]# journalctl -f -u kubelet
-- Logs begin at Sun 2018-08-19 21:26:42 HKT. --
Aug 20 15:20:51 server81 kubelet[3589]: W0820 15:20:51.476453 3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:20:51 server81 kubelet[3589]: E0820 15:20:51.477201 3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:20:56 server81 kubelet[3589]: W0820 15:20:56.479691 3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:20:56 server81 kubelet[3589]: E0820 15:20:56.480061 3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:01 server81 kubelet[3589]: W0820 15:21:01.483272 3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:01 server81 kubelet[3589]: E0820 15:21:01.484824 3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:06 server81 kubelet[3589]: W0820 15:21:06.488203 3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:06 server81 kubelet[3589]: E0820 15:21:06.489788 3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:11 server81 kubelet[3589]: W0820 15:21:11.497281 3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:11 server81 kubelet[3589]: E0820 15:21:11.497941 3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Aug 20 15:21:16 server81 kubelet[3589]: W0820 15:21:16.502290 3589 cni.go:172] Unable to update cni config: No networks found in /etc/cni/net.d
Aug 20 15:21:16 server81 kubelet[3589]: E0820 15:21:16.502733 3589 kubelet.go:2112] Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Explanation: at this point the logs report that there is no cni network yet; this will be covered later when the Calico network is installed.
Deploy the kube-proxy service
Write the kube-proxy.service file (/usr/lib/systemd/system)
[root@server81 install_k8s_node]# cat /usr/lib/systemd/system/kube-proxy.service
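The unit file content is not reproduced above. A sketch of what it can look like, built from the fragments explained below and from the service status output further down (the exact variable names and targets are assumptions):

[Unit]
Description=Kube Proxy Service
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy \
            $KUBE_LOGTOSTDERR \
            $KUBE_LOG_LEVEL \
            $KUBE_MASTER \
            $KUBE_PROXY_ARGS
Restart=on-failure

[Install]
WantedBy=default.target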
kube-proxy.service notes:
EnvironmentFile=-/etc/kubernetes/config
EnvironmentFile=-/etc/kubernetes/proxy
kube-proxy is configured to read two configuration files, config and proxy. The config file was already written during the master deployment; it is a shared configuration file. The proxy-specific configuration file is written separately below.
ExecStart=/usr/bin/kube-proxy \
Defines the path of the binary executable that the kube-proxy service runs (/usr/bin/kube-proxy) and its startup parameters.
The proxy configuration file (/etc/kubernetes)
[root@server81 install_k8s_node]# cat /etc/kubernetes/
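The proxy file itself is not reproduced above. Judging from the running command line in the status output below, it only needs the proxy-specific arguments; a minimal sketch (the KUBE_PROXY_ARGS variable name is an assumption, and --master normally comes from the shared /etc/kubernetes/config file):

###
# kubernetes proxy config
# arguments appended to the kube-proxy command line
KUBE_PROXY_ARGS="--kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.1.0.0/16"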
Parameter notes:
- --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig specifies the kubeconfig file that kube-proxy runs with
- --cluster-cidr=10.1.0.0/16 specifies the virtual pod IP range in kubernetes (the CNI network), a parameter that calico uses later
Start the kube-proxy service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl start kube-proxy
systemctl status kube-proxy
The execution looks like this:
[root@server81 install_k8s_node]# systemctl daemon-reload
[root@server81 install_k8s_node]# systemctl enable kube-proxy
[root@server81 install_k8s_node]# systemctl start kube-proxy
[root@server81 install_k8s_node]# systemctl status kube-proxy
● kube-proxy.service - Kube Proxy Service
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:32:10 HKT; 11min ago
Main PID: 3988 (kube-proxy)
CGroup: /system.slice/kube-proxy.service
‣ 3988 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.1.0.0/16
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.742562 3988 conntrack.go:52] Setting nf_conntrack_max to 131072
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.748678 3988 conntrack.go:83] Setting conntrack hashsize to 32768
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749216 3988 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749266 3988 conntrack.go:98] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749762 3988 config.go:102] Starting endpoints config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749807 3988 controller_utils.go:1025] Waiting for caches to sync for endpoints config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749838 3988 config.go:202] Starting service config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.749845 3988 controller_utils.go:1025] Waiting for caches to sync for service config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.850911 3988 controller_utils.go:1032] Caches are synced for endpoints config controller
Aug 20 15:32:10 server81 kube-proxy[3988]: I0820 15:32:10.850959 3988 controller_utils.go:1032] Caches are synced for service config controller
[root@server81 install_k8s_node]#
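Optionally, you can confirm that kube-proxy (running in the default iptables mode) has started programming the nat table; its KUBE-SERVICES chain should already exist:

iptables -t nat -L KUBE-SERVICES -n | head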
At this point the node services on Server81 are fully deployed. For Server86 and Server87 I use scripts to deploy quickly; the process is identical to Server81.
Quickly deploy Server86 with the scripts
[root@server86 kubernetesTLS]# cd /opt/
[root@server86 opt]# ls
install_etcd_cluster install_kubernetes rh
[root@server86 opt]#
[root@server86 opt]#
[root@server86 opt]# cd install_kubernetes/
[root@server86 install_kubernetes]# ls
check_etcd install_Calico install_CoreDNS install_k8s_master install_k8s_node install_kubernetes_software install_RAS_node MASTER_INFO reademe.txt
[root@server86 install_kubernetes]#
[root@server86 install_kubernetes]# cd install_k8s_node/
[root@server86 install_k8s_node]# ls
nodefile Step1_config.sh Step2_install_docker.sh Step3_install_kubelet.sh Step4_install_proxy.sh Step5_node_approve_csr.sh Step6_master_node_context.sh
[root@server86 install_k8s_node]#
[root@server86 install_k8s_node]# ./Step1_config.sh
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
Docs: man:firewalld(1)
SELinux status: disabled
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubectl’ -> ‘/usr/bin/kubectl’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubelet’ -> ‘/usr/bin/kubelet’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kube-proxy’ -> ‘/usr/bin/kube-proxy’
[root@server86 install_k8s_node]#
[root@server86 install_k8s_node]# ./Step2_install_docker.sh
Loaded plugins: fastestmirror, langpacks
Examining /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Marking /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* epel: mirrors.tongji.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.163.com
--> Processing Dependency: pigz for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.66-1.el7 will be installed
--> Processing Dependency: selinux-policy-targeted >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy-base >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
---> Package pigz.x86_64 0:2.3.4-1.el7 will be installed
--> Running transaction check
---> Package selinux-policy.noarch 0:3.13.1-166.el7_4.5 will be updated
---> Package selinux-policy.noarch 0:3.13.1-192.el7_5.4 will be an update
--> Processing Dependency: policycoreutils >= 2.5-18 for package: selinux-policy-3.13.1-192.el7_5.4.noarch
---> Package selinux-policy-targeted.noarch 0:3.13.1-166.el7_4.5 will be updated
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4 will be an update
--> Running transaction check
---> Package policycoreutils.x86_64 0:2.5-17.1.el7 will be updated
--> Processing Dependency: policycoreutils = 2.5-17.1.el7 for package: policycoreutils-python-2.5-17.1.el7.x86_64
---> Package policycoreutils.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: libsepol >= 2.5-8 for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libselinux-utils >= 2.5-12 for package: policycoreutils-2.5-22.el7.x86_64
--> Running transaction check
---> Package libselinux-utils.x86_64 0:2.5-11.el7 will be updated
---> Package libselinux-utils.x86_64 0:2.5-12.el7 will be an update
--> Processing Dependency: libselinux(x86-64) = 2.5-12.el7 for package: libselinux-utils-2.5-12.el7.x86_64
---> Package libsepol.i686 0:2.5-6.el7 will be updated
---> Package libsepol.x86_64 0:2.5-6.el7 will be updated
---> Package libsepol.i686 0:2.5-8.1.el7 will be an update
---> Package libsepol.x86_64 0:2.5-8.1.el7 will be an update
---> Package policycoreutils-python.x86_64 0:2.5-17.1.el7 will be updated
---> Package policycoreutils-python.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: setools-libs >= 3.3.8-2 for package: policycoreutils-python-2.5-22.el7.x86_64
--> Processing Dependency: libsemanage-python >= 2.5-9 for package: policycoreutils-python-2.5-22.el7.x86_64
--> Running transaction check
---> Package libselinux.i686 0:2.5-11.el7 will be updated
---> Package libselinux.x86_64 0:2.5-11.el7 will be updated
--> Processing Dependency: libselinux(x86-64) = 2.5-11.el7 for package: libselinux-python-2.5-11.el7.x86_64
---> Package libselinux.i686 0:2.5-12.el7 will be an update
---> Package libselinux.x86_64 0:2.5-12.el7 will be an update
---> Package libsemanage-python.x86_64 0:2.5-8.el7 will be updated
---> Package libsemanage-python.x86_64 0:2.5-11.el7 will be an update
--> Processing Dependency: libsemanage = 2.5-11.el7 for package: libsemanage-python-2.5-11.el7.x86_64
---> Package setools-libs.x86_64 0:3.3.8-1.1.el7 will be updated
---> Package setools-libs.x86_64 0:3.3.8-2.el7 will be an update
--> Running transaction check
---> Package libselinux-python.x86_64 0:2.5-11.el7 will be updated
---> Package libselinux-python.x86_64 0:2.5-12.el7 will be an update
---> Package libsemanage.x86_64 0:2.5-8.el7 will be updated
---> Package libsemanage.x86_64 0:2.5-11.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
===========================================================================================================================================================================
Package Arch Version Repository Size
===========================================================================================================================================================================
Installing:
docker-ce x86_64 18.03.0.ce-1.el7.centos /docker-ce-18.03.0.ce-1.el7.centos.x86_64 151 M
Installing for dependencies:
container-selinux noarch 2:2.66-1.el7 extras 35 k
pigz x86_64 2.3.4-1.el7 epel 81 k
Updating for dependencies:
libselinux i686 2.5-12.el7 base 166 k
libselinux x86_64 2.5-12.el7 base 162 k
libselinux-python x86_64 2.5-12.el7 base 235 k
libselinux-utils x86_64 2.5-12.el7 base 151 k
libsemanage x86_64 2.5-11.el7 base 150 k
libsemanage-python x86_64 2.5-11.el7 base 112 k
libsepol i686 2.5-8.1.el7 base 293 k
libsepol x86_64 2.5-8.1.el7 base 297 k
policycoreutils x86_64 2.5-22.el7 base 867 k
policycoreutils-python x86_64 2.5-22.el7 base 454 k
selinux-policy noarch 3.13.1-192.el7_5.4 updates 453 k
selinux-policy-targeted noarch 3.13.1-192.el7_5.4 updates 6.6 M
setools-libs x86_64 3.3.8-2.el7 base 619 k
Transaction Summary
===========================================================================================================================================================================
Install 1 Package (+ 2 Dependent packages)
Upgrade ( 13 Dependent packages)
Total size: 161 M
Total download size: 11 M
Downloading packages:
No Presto metadata available for base
updates/7/x86_64/prestodelta | 420 kB 00:00:00
(1/15): container-selinux-2.66-1.el7.noarch.rpm | 35 kB 00:00:00
(2/15): libselinux-2.5-12.el7.i686.rpm | 166 kB 00:00:00
(3/15): libsemanage-2.5-11.el7.x86_64.rpm | 150 kB 00:00:00
(4/15): libsemanage-python-2.5-11.el7.x86_64.rpm | 112 kB 00:00:00
(5/15): libselinux-utils-2.5-12.el7.x86_64.rpm | 151 kB 00:00:00
(6/15): libselinux-2.5-12.el7.x86_64.rpm | 162 kB 00:00:00
(7/15): libsepol-2.5-8.1.el7.i686.rpm | 293 kB 00:00:00
(8/15): libsepol-2.5-8.1.el7.x86_64.rpm | 297 kB 00:00:00
(9/15): selinux-policy-3.13.1-192.el7_5.4.noarch.rpm | 453 kB 00:00:00
(10/15): policycoreutils-2.5-22.el7.x86_64.rpm | 867 kB 00:00:00
(11/15): selinux-policy-targeted-3.13.1-192.el7_5.4.noarch.rpm | 6.6 MB 00:00:00
(12/15): policycoreutils-python-2.5-22.el7.x86_64.rpm | 454 kB 00:00:01
(13/15): setools-libs-3.3.8-2.el7.x86_64.rpm | 619 kB 00:00:00
(14/15): pigz-2.3.4-1.el7.x86_64.rpm | 81 kB 00:00:01
(15/15): libselinux-python-2.5-12.el7.x86_64.rpm | 235 kB 00:00:01
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 4.7 MB/s | 11 MB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : libsepol-2.5-8.1.el7.x86_64 1/29
Updating : libselinux-2.5-12.el7.x86_64 2/29
Updating : libsemanage-2.5-11.el7.x86_64 3/29
Updating : libselinux-utils-2.5-12.el7.x86_64 4/29
Updating : policycoreutils-2.5-22.el7.x86_64 5/29
Updating : selinux-policy-3.13.1-192.el7_5.4.noarch 6/29
Updating : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch 7/29
Updating : libsemanage-python-2.5-11.el7.x86_64 8/29
Updating : libselinux-python-2.5-12.el7.x86_64 9/29
Updating : setools-libs-3.3.8-2.el7.x86_64 10/29
Updating : policycoreutils-python-2.5-22.el7.x86_64 11/29
Installing : 2:container-selinux-2.66-1.el7.noarch 12/29
setsebool: SELinux is disabled.
Installing : pigz-2.3.4-1.el7.x86_64 13/29
Updating : libsepol-2.5-8.1.el7.i686 14/29
Installing : docker-ce-18.03.0.ce-1.el7.centos.x86_64 15/29
Updating : libselinux-2.5-12.el7.i686 16/29
Cleanup : selinux-policy-targeted-3.13.1-166.el7_4.5.noarch 17/29
Cleanup : policycoreutils-python-2.5-17.1.el7.x86_64 18/29
Cleanup : selinux-policy-3.13.1-166.el7_4.5.noarch 19/29
Cleanup : libselinux-2.5-11.el7 20/29
Cleanup : policycoreutils-2.5-17.1.el7.x86_64 21/29
Cleanup : libselinux-utils-2.5-11.el7.x86_64 22/29
Cleanup : setools-libs-3.3.8-1.1.el7.x86_64 23/29
Cleanup : libselinux-python-2.5-11.el7.x86_64 24/29
Cleanup : libsemanage-python-2.5-8.el7.x86_64 25/29
Cleanup : libsepol-2.5-6.el7 26/29
Cleanup : libsemanage-2.5-8.el7.x86_64 27/29
Cleanup : libselinux-2.5-11.el7 28/29
Cleanup : libsepol-2.5-6.el7 29/29
Verifying : libselinux-python-2.5-12.el7.x86_64 1/29
Verifying : selinux-policy-3.13.1-192.el7_5.4.noarch 2/29
Verifying : setools-libs-3.3.8-2.el7.x86_64 3/29
Verifying : libsemanage-python-2.5-11.el7.x86_64 4/29
Verifying : policycoreutils-2.5-22.el7.x86_64 5/29
Verifying : libsepol-2.5-8.1.el7.i686 6/29
Verifying : libsemanage-2.5-11.el7.x86_64 7/29
Verifying : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch 8/29
Verifying : pigz-2.3.4-1.el7.x86_64 9/29
Verifying : policycoreutils-python-2.5-22.el7.x86_64 10/29
Verifying : 2:container-selinux-2.66-1.el7.noarch 11/29
Verifying : libselinux-2.5-12.el7.i686 12/29
Verifying : libsepol-2.5-8.1.el7.x86_64 13/29
Verifying : libselinux-2.5-12.el7.x86_64 14/29
Verifying : docker-ce-18.03.0.ce-1.el7.centos.x86_64 15/29
Verifying : libselinux-utils-2.5-12.el7.x86_64 16/29
Verifying : libselinux-utils-2.5-11.el7.x86_64 17/29
Verifying : libsepol-2.5-6.el7.i686 18/29
Verifying : libselinux-2.5-11.el7.x86_64 19/29
Verifying : libsepol-2.5-6.el7.x86_64 20/29
Verifying : policycoreutils-python-2.5-17.1.el7.x86_64 21/29
Verifying : selinux-policy-targeted-3.13.1-166.el7_4.5.noarch 22/29
Verifying : policycoreutils-2.5-17.1.el7.x86_64 23/29
Verifying : libsemanage-python-2.5-8.el7.x86_64 24/29
Verifying : libselinux-2.5-11.el7.i686 25/29
Verifying : libsemanage-2.5-8.el7.x86_64 26/29
Verifying : selinux-policy-3.13.1-166.el7_4.5.noarch 27/29
Verifying : libselinux-python-2.5-11.el7.x86_64 28/29
Verifying : setools-libs-3.3.8-1.1.el7.x86_64 29/29
Installed:
docker-ce.x86_64 0:18.03.0.ce-1.el7.centos
Dependency Installed:
container-selinux.noarch 2:2.66-1.el7 pigz.x86_64 0:2.3.4-1.el7
Dependency Updated:
libselinux.i686 0:2.5-12.el7 libselinux.x86_64 0:2.5-12.el7 libselinux-python.x86_64 0:2.5-12.el7
libselinux-utils.x86_64 0:2.5-12.el7 libsemanage.x86_64 0:2.5-11.el7 libsemanage-python.x86_64 0:2.5-11.el7
libsepol.i686 0:2.5-8.1.el7 libsepol.x86_64 0:2.5-8.1.el7 policycoreutils.x86_64 0:2.5-22.el7
policycoreutils-python.x86_64 0:2.5-22.el7 selinux-policy.noarch 0:3.13.1-192.el7_5.4 selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4
setools-libs.x86_64 0:3.3.8-2.el7
Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:57:07 HKT; 21ms ago
Docs: https://docs.docker.com
Main PID: 2955 (dockerd)
Memory: 39.0M
CGroup: /system.slice/docker.service
├─2955 /usr/bin/dockerd
└─2964 docker-containerd --config /var/run/docker/containerd/containerd.toml
Aug 20 15:57:06 server86 dockerd[2955]: time="2018-08-20T15:57:06.737217664+08:00" level=info msg="devmapper: Creating filesystem xfs on device docker-8:3-67...8927-base]"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.045640563+08:00" level=info msg="devmapper: Successfully created filesystem xfs on device d...18927-base"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.257682803+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.260865731+08:00" level=info msg="Loading containers: start."
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.603658334+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 17...IP address"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.763307367+08:00" level=info msg="Loading containers: done."
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.812802202+08:00" level=info msg="Docker daemon" commit=0520e24 graphdriver(s)=devicemapper ...=18.03.0-ce
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.813732684+08:00" level=info msg="Daemon has completed initialization"
Aug 20 15:57:07 server86 dockerd[2955]: time="2018-08-20T15:57:07.866979598+08:00" level=info msg="API listen on /var/run/docker.sock"
Aug 20 15:57:07 server86 systemd[1]: Started Docker Application Container Engine.
Hint: Some lines were ellipsized, use -l to show in full.
[root@server86 install_k8s_node]#
[root@server86 install_k8s_node]# ls
nodefile Step1_config.sh Step2_install_docker.sh Step3_install_kubelet.sh Step4_install_proxy.sh Step5_node_approve_csr.sh Step6_master_node_context.sh
[root@server86 install_k8s_node]#
[root@server86 install_k8s_node]# ./Step3_install_kubelet.sh
MASTER_IP=172.16.5.81
cat: /opt/ETCD_CLUSER_INFO: No such file or directory
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:57:15 HKT; 142ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 3195 (kubelet)
Memory: 5.8M
CGroup: /system.slice/kubelet.service
└─3195 /usr/bin/kubelet --logtostderr=true --v=0 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=172.16.5.86 --pod-infra-container-image=...
Aug 20 15:57:15 server86 systemd[1]: Started Kubernetes Kubelet Server.
Aug 20 15:57:15 server86 systemd[1]: Starting Kubernetes Kubelet Server...
[root@server86 install_k8s_node]#
[root@server86 install_k8s_node]# ./Step4_install_proxy.sh
Created symlink from /etc/systemd/system/default.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kube-proxy.service - Kube Proxy Service
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:57:19 HKT; 97ms ago
Main PID: 3282 (kube-proxy)
Memory: 5.5M
CGroup: /system.slice/kube-proxy.service
└─3282 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.1.0...
Aug 20 15:57:19 server86 systemd[1]: Started Kube Proxy Service.
Aug 20 15:57:19 server86 systemd[1]: Starting Kube Proxy Service...
[root@server86 install_k8s_node]#
[root@server86 install_k8s_node]#
Quickly deploy Server87 with the scripts
[root@server87 ~]# cd /opt/
[root@server87 opt]# ls
install_etcd_cluster install_kubernetes rh
[root@server87 opt]# cd install_kubernetes/
[root@server87 install_kubernetes]# ls
check_etcd install_Calico install_CoreDNS install_k8s_master install_k8s_node install_kubernetes_software install_RAS_node MASTER_INFO reademe.txt
[root@server87 install_kubernetes]# cd install_k8s_node/
[root@server87 install_k8s_node]# ls
nodefile Step1_config.sh Step2_install_docker.sh Step3_install_kubelet.sh Step4_install_proxy.sh Step5_node_approve_csr.sh Step6_master_node_context.sh
[root@server87 install_k8s_node]# ./Step1_config.sh
● firewalld.service - firewalld - dynamic firewall daemon
Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
Active: inactive (dead)
SELinux status: disabled
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubectl’ -> ‘/usr/bin/kubectl’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kubelet’ -> ‘/usr/bin/kubelet’
‘/opt/install_kubernetes/install_k8s_node/../install_kubernetes_software/kube-proxy’ -> ‘/usr/bin/kube-proxy’
[root@server87 install_k8s_node]# ./Step2_install_docker.sh
Loaded plugins: fastestmirror, langpacks
Examining /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Marking /opt/install_kubernetes/install_k8s_node/nodefile/docker/docker-ce-18.03.0.ce-1.el7.centos.x86_64.rpm to be installed
Resolving Dependencies
--> Running transaction check
---> Package docker-ce.x86_64 0:18.03.0.ce-1.el7.centos will be installed
--> Processing Dependency: container-selinux >= 2.9 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
Loading mirror speeds from cached hostfile
* base: mirrors.aliyun.com
* epel: mirrors.tongji.edu.cn
* extras: mirrors.aliyun.com
* updates: mirrors.163.com
--> Processing Dependency: libseccomp >= 2.3 for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
--> Processing Dependency: pigz for package: docker-ce-18.03.0.ce-1.el7.centos.x86_64
--> Running transaction check
---> Package container-selinux.noarch 2:2.66-1.el7 will be installed
--> Processing Dependency: selinux-policy-targeted >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy-base >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: selinux-policy >= 3.13.1-192 for package: 2:container-selinux-2.66-1.el7.noarch
--> Processing Dependency: policycoreutils >= 2.5-11 for package: 2:container-selinux-2.66-1.el7.noarch
---> Package libseccomp.x86_64 0:2.2.1-1.el7 will be updated
---> Package libseccomp.x86_64 0:2.3.1-3.el7 will be an update
---> Package pigz.x86_64 0:2.3.4-1.el7 will be installed
--> Running transaction check
---> Package policycoreutils.x86_64 0:2.2.5-20.el7 will be updated
--> Processing Dependency: policycoreutils = 2.2.5-20.el7 for package: policycoreutils-python-2.2.5-20.el7.x86_64
---> Package policycoreutils.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: libsepol >= 2.5-8 for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libselinux-utils >= 2.5-12 for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libsepol.so.1(LIBSEPOL_1.1)(64bit) for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libsepol.so.1(LIBSEPOL_1.0)(64bit) for package: policycoreutils-2.5-22.el7.x86_64
--> Processing Dependency: libsemanage.so.1(LIBSEMANAGE_1.1)(64bit) for package: policycoreutils-2.5-22.el7.x86_64
---> Package selinux-policy.noarch 0:3.13.1-60.el7 will be updated
---> Package selinux-policy.noarch 0:3.13.1-192.el7_5.4 will be an update
---> Package selinux-policy-targeted.noarch 0:3.13.1-60.el7 will be updated
---> Package selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4 will be an update
--> Running transaction check
---> Package libselinux-utils.x86_64 0:2.2.2-6.el7 will be updated
---> Package libselinux-utils.x86_64 0:2.5-12.el7 will be an update
--> Processing Dependency: libselinux(x86-64) = 2.5-12.el7 for package: libselinux-utils-2.5-12.el7.x86_64
---> Package libsemanage.x86_64 0:2.1.10-18.el7 will be updated
--> Processing Dependency: libsemanage = 2.1.10-18.el7 for package: libsemanage-python-2.1.10-18.el7.x86_64
---> Package libsemanage.x86_64 0:2.5-11.el7 will be an update
---> Package libsepol.x86_64 0:2.1.9-3.el7 will be updated
---> Package libsepol.x86_64 0:2.5-8.1.el7 will be an update
---> Package policycoreutils-python.x86_64 0:2.2.5-20.el7 will be updated
---> Package policycoreutils-python.x86_64 0:2.5-22.el7 will be an update
--> Processing Dependency: setools-libs >= 3.3.8-2 for package: policycoreutils-python-2.5-22.el7.x86_64
--> Running transaction check
---> Package libselinux.x86_64 0:2.2.2-6.el7 will be updated
--> Processing Dependency: libselinux = 2.2.2-6.el7 for package: libselinux-python-2.2.2-6.el7.x86_64
---> Package libselinux.x86_64 0:2.5-12.el7 will be an update
---> Package libsemanage-python.x86_64 0:2.1.10-18.el7 will be updated
---> Package libsemanage-python.x86_64 0:2.5-11.el7 will be an update
---> Package setools-libs.x86_64 0:3.3.7-46.el7 will be updated
---> Package setools-libs.x86_64 0:3.3.8-2.el7 will be an update
--> Running transaction check
---> Package libselinux-python.x86_64 0:2.2.2-6.el7 will be updated
---> Package libselinux-python.x86_64 0:2.5-12.el7 will be an update
--> Processing Conflict: libselinux-2.5-12.el7.x86_64 conflicts systemd < 219-20
--> Restarting Dependency Resolution with new changes.
--> Running transaction check
---> Package systemd.x86_64 0:219-19.el7 will be updated
--> Processing Dependency: systemd = 219-19.el7 for package: systemd-python-219-19.el7.x86_64
--> Processing Dependency: systemd = 219-19.el7 for package: systemd-sysv-219-19.el7.x86_64
---> Package systemd.x86_64 0:219-57.el7 will be an update
--> Processing Dependency: systemd-libs = 219-57.el7 for package: systemd-219-57.el7.x86_64
--> Processing Dependency: liblz4.so.1()(64bit) for package: systemd-219-57.el7.x86_64
--> Running transaction check
---> Package lz4.x86_64 0:1.7.5-2.el7 will be installed
---> Package systemd-libs.x86_64 0:219-19.el7 will be updated
--> Processing Dependency: systemd-libs = 219-19.el7 for package: libgudev1-219-19.el7.x86_64
---> Package systemd-libs.x86_64 0:219-57.el7 will be an update
---> Package systemd-python.x86_64 0:219-19.el7 will be updated
---> Package systemd-python.x86_64 0:219-57.el7 will be an update
---> Package systemd-sysv.x86_64 0:219-19.el7 will be updated
---> Package systemd-sysv.x86_64 0:219-57.el7 will be an update
--> Running transaction check
---> Package libgudev1.x86_64 0:219-19.el7 will be updated
---> Package libgudev1.x86_64 0:219-57.el7 will be an update
--> Finished Dependency Resolution
Dependencies Resolved
===========================================================================================================================================================================
Package Arch Version Repository Size
===========================================================================================================================================================================
Installing:
docker-ce x86_64 18.03.0.ce-1.el7.centos /docker-ce-18.03.0.ce-1.el7.centos.x86_64 151 M
Updating:
systemd x86_64 219-57.el7 base 5.0 M
Installing for dependencies:
container-selinux noarch 2:2.66-1.el7 extras 35 k
lz4 x86_64 1.7.5-2.el7 base 98 k
pigz x86_64 2.3.4-1.el7 epel 81 k
Updating for dependencies:
libgudev1 x86_64 219-57.el7 base 92 k
libseccomp x86_64 2.3.1-3.el7 base 56 k
libselinux x86_64 2.5-12.el7 base 162 k
libselinux-python x86_64 2.5-12.el7 base 235 k
libselinux-utils x86_64 2.5-12.el7 base 151 k
libsemanage x86_64 2.5-11.el7 base 150 k
libsemanage-python x86_64 2.5-11.el7 base 112 k
libsepol x86_64 2.5-8.1.el7 base 297 k
policycoreutils x86_64 2.5-22.el7 base 867 k
policycoreutils-python x86_64 2.5-22.el7 base 454 k
selinux-policy noarch 3.13.1-192.el7_5.4 updates 453 k
selinux-policy-targeted noarch 3.13.1-192.el7_5.4 updates 6.6 M
setools-libs x86_64 3.3.8-2.el7 base 619 k
systemd-libs x86_64 219-57.el7 base 402 k
systemd-python x86_64 219-57.el7 base 128 k
systemd-sysv x86_64 219-57.el7 base 79 k
Transaction Summary
===========================================================================================================================================================================
Install 1 Package (+ 3 Dependent packages)
Upgrade 1 Package (+16 Dependent packages)
Total size: 166 M
Total download size: 16 M
Downloading packages:
No Presto metadata available for base
updates/7/x86_64/prestodelta | 420 kB 00:00:01
(1/19): libselinux-2.5-12.el7.x86_64.rpm | 162 kB 00:00:00
(2/19): libselinux-utils-2.5-12.el7.x86_64.rpm | 151 kB 00:00:00
(3/19): libsemanage-2.5-11.el7.x86_64.rpm | 150 kB 00:00:00
(4/19): libgudev1-219-57.el7.x86_64.rpm | 92 kB 00:00:00
(5/19): libsemanage-python-2.5-11.el7.x86_64.rpm | 112 kB 00:00:00
(6/19): libsepol-2.5-8.1.el7.x86_64.rpm | 297 kB 00:00:00
(7/19): lz4-1.7.5-2.el7.x86_64.rpm | 98 kB 00:00:00
(8/19): libselinux-python-2.5-12.el7.x86_64.rpm | 235 kB 00:00:00
(9/19): selinux-policy-3.13.1-192.el7_5.4.noarch.rpm | 453 kB 00:00:00
(10/19): policycoreutils-python-2.5-22.el7.x86_64.rpm | 454 kB 00:00:00
(11/19): setools-libs-3.3.8-2.el7.x86_64.rpm | 619 kB 00:00:00
(12/19): systemd-219-57.el7.x86_64.rpm | 5.0 MB 00:00:00
(13/19): container-selinux-2.66-1.el7.noarch.rpm | 35 kB 00:00:01
(14/19): systemd-libs-219-57.el7.x86_64.rpm | 402 kB 00:00:00
(15/19): systemd-sysv-219-57.el7.x86_64.rpm | 79 kB 00:00:00
(16/19): selinux-policy-targeted-3.13.1-192.el7_5.4.noarch.rpm | 6.6 MB 00:00:01
(17/19): systemd-python-219-57.el7.x86_64.rpm | 128 kB 00:00:00
(18/19): pigz-2.3.4-1.el7.x86_64.rpm | 81 kB 00:00:01
(19/19): policycoreutils-2.5-22.el7.x86_64.rpm | 867 kB 00:00:01
---------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total 7.4 MB/s | 16 MB 00:00:02
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Updating : libsepol-2.5-8.1.el7.x86_64 1/38
Updating : libselinux-2.5-12.el7.x86_64 2/38
Updating : libsemanage-2.5-11.el7.x86_64 3/38
Installing : lz4-1.7.5-2.el7.x86_64 4/38
Updating : systemd-libs-219-57.el7.x86_64 5/38
Updating : systemd-219-57.el7.x86_64 6/38
Updating : libselinux-utils-2.5-12.el7.x86_64 7/38
Updating : policycoreutils-2.5-22.el7.x86_64 8/38
Updating : selinux-policy-3.13.1-192.el7_5.4.noarch 9/38
Updating : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch 10/38
Updating : libsemanage-python-2.5-11.el7.x86_64 11/38
Updating : libselinux-python-2.5-12.el7.x86_64 12/38
Updating : setools-libs-3.3.8-2.el7.x86_64 13/38
Updating : policycoreutils-python-2.5-22.el7.x86_64 14/38
Installing : 2:container-selinux-2.66-1.el7.noarch 15/38
setsebool: SELinux is disabled.
Installing : pigz-2.3.4-1.el7.x86_64 16/38
Updating : libseccomp-2.3.1-3.el7.x86_64 17/38
Installing : docker-ce-18.03.0.ce-1.el7.centos.x86_64 18/38
Updating : systemd-sysv-219-57.el7.x86_64 19/38
Updating : systemd-python-219-57.el7.x86_64 20/38
Updating : libgudev1-219-57.el7.x86_64 21/38
Cleanup : selinux-policy-targeted-3.13.1-60.el7.noarch 22/38
Cleanup : policycoreutils-python-2.2.5-20.el7.x86_64 23/38
Cleanup : selinux-policy-3.13.1-60.el7.noarch 24/38
Cleanup : systemd-sysv-219-19.el7.x86_64 25/38
Cleanup : policycoreutils-2.2.5-20.el7.x86_64 26/38
Cleanup : systemd-python-219-19.el7.x86_64 27/38
Cleanup : systemd-219-19.el7.x86_64 28/38
Cleanup : setools-libs-3.3.7-46.el7.x86_64 29/38
Cleanup : libselinux-utils-2.2.2-6.el7.x86_64 30/38
Cleanup : libselinux-python-2.2.2-6.el7.x86_64 31/38
Cleanup : libsemanage-python-2.1.10-18.el7.x86_64 32/38
Cleanup : libsemanage-2.1.10-18.el7.x86_64 33/38
Cleanup : libgudev1-219-19.el7.x86_64 34/38
Cleanup : systemd-libs-219-19.el7.x86_64 35/38
Cleanup : libselinux-2.2.2-6.el7.x86_64 36/38
Cleanup : libsepol-2.1.9-3.el7.x86_64 37/38
Cleanup : libseccomp-2.2.1-1.el7.x86_64 38/38
Verifying : libsemanage-python-2.5-11.el7.x86_64 1/38
Verifying : libsemanage-2.5-11.el7.x86_64 2/38
Verifying : libselinux-python-2.5-12.el7.x86_64 3/38
Verifying : selinux-policy-3.13.1-192.el7_5.4.noarch 4/38
Verifying : setools-libs-3.3.8-2.el7.x86_64 5/38
Verifying : libseccomp-2.3.1-3.el7.x86_64 6/38
Verifying : policycoreutils-2.5-22.el7.x86_64 7/38
Verifying : selinux-policy-targeted-3.13.1-192.el7_5.4.noarch 8/38
Verifying : pigz-2.3.4-1.el7.x86_64 9/38
Verifying : policycoreutils-python-2.5-22.el7.x86_64 10/38
Verifying : libgudev1-219-57.el7.x86_64 11/38
Verifying : 2:container-selinux-2.66-1.el7.noarch 12/38
Verifying : systemd-sysv-219-57.el7.x86_64 13/38
Verifying : lz4-1.7.5-2.el7.x86_64 14/38
Verifying : systemd-219-57.el7.x86_64 15/38
Verifying : libsepol-2.5-8.1.el7.x86_64 16/38
Verifying : systemd-libs-219-57.el7.x86_64 17/38
Verifying : libselinux-2.5-12.el7.x86_64 18/38
Verifying : docker-ce-18.03.0.ce-1.el7.centos.x86_64 19/38
Verifying : libselinux-utils-2.5-12.el7.x86_64 20/38
Verifying : systemd-python-219-57.el7.x86_64 21/38
Verifying : libsemanage-python-2.1.10-18.el7.x86_64 22/38
Verifying : selinux-policy-targeted-3.13.1-60.el7.noarch 23/38
Verifying : setools-libs-3.3.7-46.el7.x86_64 24/38
Verifying : libsemanage-2.1.10-18.el7.x86_64 25/38
Verifying : systemd-sysv-219-19.el7.x86_64 26/38
Verifying : libgudev1-219-19.el7.x86_64 27/38
Verifying : systemd-219-19.el7.x86_64 28/38
Verifying : selinux-policy-3.13.1-60.el7.noarch 29/38
Verifying : systemd-libs-219-19.el7.x86_64 30/38
Verifying : libselinux-utils-2.2.2-6.el7.x86_64 31/38
Verifying : libseccomp-2.2.1-1.el7.x86_64 32/38
Verifying : libsepol-2.1.9-3.el7.x86_64 33/38
Verifying : libselinux-python-2.2.2-6.el7.x86_64 34/38
Verifying : policycoreutils-2.2.5-20.el7.x86_64 35/38
Verifying : systemd-python-219-19.el7.x86_64 36/38
Verifying : libselinux-2.2.2-6.el7.x86_64 37/38
Verifying : policycoreutils-python-2.2.5-20.el7.x86_64 38/38
Installed:
docker-ce.x86_64 0:18.03.0.ce-1.el7.centos
Dependency Installed:
container-selinux.noarch 2:2.66-1.el7 lz4.x86_64 0:1.7.5-2.el7 pigz.x86_64 0:2.3.4-1.el7
Updated:
systemd.x86_64 0:219-57.el7
Dependency Updated:
libgudev1.x86_64 0:219-57.el7 libseccomp.x86_64 0:2.3.1-3.el7 libselinux.x86_64 0:2.5-12.el7
libselinux-python.x86_64 0:2.5-12.el7 libselinux-utils.x86_64 0:2.5-12.el7 libsemanage.x86_64 0:2.5-11.el7
libsemanage-python.x86_64 0:2.5-11.el7 libsepol.x86_64 0:2.5-8.1.el7 policycoreutils.x86_64 0:2.5-22.el7
policycoreutils-python.x86_64 0:2.5-22.el7 selinux-policy.noarch 0:3.13.1-192.el7_5.4 selinux-policy-targeted.noarch 0:3.13.1-192.el7_5.4
setools-libs.x86_64 0:3.3.8-2.el7 systemd-libs.x86_64 0:219-57.el7 systemd-python.x86_64 0:219-57.el7
systemd-sysv.x86_64 0:219-57.el7
Complete!
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
● docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:51:50 HKT; 9ms ago
Docs: https://docs.docker.com
Main PID: 42077 (dockerd)
Memory: 40.8M
CGroup: /system.slice/docker.service
├─42077 /usr/bin/dockerd
└─42086 docker-containerd --config /var/run/docker/containerd/containerd.toml
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.337814778+08:00" level=info msg="devmapper: Successfully created filesystem xfs on device d...5123-base"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.463516508+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.463782799+08:00" level=warning msg="mountpoint for pids not found"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.464461343+08:00" level=info msg="Loading containers: start."
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.601643093+08:00" level=info msg="Default bridge (docker0) is assigned with an IP address 17...P address"
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.677859724+08:00" level=info msg="Loading containers: done."
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.696315433+08:00" level=info msg="Docker daemon" commit=0520e24 graphdriver(s)=devicemapper ...18.03.0-ce
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.696473183+08:00" level=info msg="Daemon has completed initialization"
Aug 20 15:51:50 server87 systemd[1]: Started Docker Application Container Engine.
Aug 20 15:51:50 server87 dockerd[42077]: time="2018-08-20T15:51:50.714102886+08:00" level=info msg="API listen on /var/run/docker.sock"
Hint: Some lines were ellipsized, use -l to show in full.
[root@server87 install_k8s_node]# ls
nodefile Step1_config.sh Step2_install_docker.sh Step3_install_kubelet.sh Step4_install_proxy.sh Step5_node_approve_csr.sh Step6_master_node_context.sh
[root@server87 install_k8s_node]# ./Step3_install_kubelet.sh
MASTER_IP=172.16.5.81
cat: /opt/ETCD_CLUSER_INFO: No such file or directory
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
● kubelet.service - Kubernetes Kubelet Server
Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:52:13 HKT; 46ms ago
Docs: https://github.com/GoogleCloudPlatform/kubernetes
Main PID: 42486 (kubelet)
Memory: 6.4M
CGroup: /system.slice/kubelet.service
└─42486 /usr/bin/kubelet --logtostderr=true --v=0 --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --hostname-override=172.16.5.87 --pod-infra-container-image...
Aug 20 15:52:13 server87 systemd[1]: Started Kubernetes Kubelet Server.
Aug 20 15:52:13 server87 systemd[1]: Starting Kubernetes Kubelet Server...
[root@server87 install_k8s_node]#
[root@server87 install_k8s_node]# ./Step4_install_proxy.sh
Created symlink from /etc/systemd/system/default.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kube-proxy.service - Kube Proxy Service
Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
Active: active (running) since Mon 2018-08-20 15:52:18 HKT; 38ms ago
Main PID: 42814 (kube-proxy)
Memory: 5.8M
CGroup: /system.slice/kube-proxy.service
└─42814 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://172.16.5.81:8080 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.1....
Aug 20 15:52:18 server87 systemd[1]: Started Kube Proxy Service.
Aug 20 15:52:18 server87 systemd[1]: Starting Kube Proxy Service...
[root@server87 install_k8s_node]#
Back on the Master, approve the csr requests from the kubelet services on Server86 and Server87
[root@server81 opt]# kubectl get csr
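The kubectl get csr output for the two new nodes is not shown here. Since the earlier approval script only handles one Pending request at a time, a small variant (not the author's script) that approves every Pending csr in one go is:

# Approve all Pending CSRs at once, then list the nodes
kubectl get csr | grep Pending | awk '{print $1}' | xargs -r kubectl certificate approve
kubectl get nodes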
With that, the kubernetes Node services are fully deployed as well. The nodes still show NotReady here, but that is resolved simply by deploying the Calico network.
Final summary
To sum up, the whole production-style kubernetes environment with RBAC enabled has now been deployed from the binary executables. I plan to cover deploying the Calico network on the Node nodes in the next article.
Directions for further improvement
- Deploying the kubernetes environment fully offline
- A fully automated deployment project
- Documentation and automated deployment of the components outside the server cluster
I will write these up gradually as time allows; a like would give me some motivation.
If you would like the table of contents for the whole series, see the kubernetes and ops development article index.