Setting Up a K8s Cluster (New Version)
Reference: Multi-node Installation (kubesphere.com.cn)
# 1. Prepare the Environment
# Make sure all three machines can be pinged
ping 10.0.2.23
ping 10.0.2.24
ping 10.0.2.25
# Make sure the three nodes can reach each other over SSH
ssh 10.0.2.23
ssh 10.0.2.24
ssh 10.0.2.25
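If SSH between the nodes still prompts for a password, it helps to push the master's public key to every node before installing. A minimal sketch, assuming root login and the three addresses above (`plan_key_copy` is a hypothetical helper, not part of any installer); it only prints the commands so you can review them first:

```shell
#!/bin/sh
# Node list from the steps above -- adjust to your environment.
NODES="10.0.2.23 10.0.2.24 10.0.2.25"

# Hypothetical helper: print one ssh-copy-id command per node.
plan_key_copy() {
    for node in $NODES; do
        echo "ssh-copy-id root@$node"
    done
}

# Review the plan; to actually push the keys, pipe it to sh:
#   plan_key_copy | sh
plan_key_copy
```

Before running the plan, generate a key pair on the master if one does not exist yet (`ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa`).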
# Make sure all three nodes can reach the public internet
ping www.qq.com
# Install the required tools
# Docker is assumed to be installed already and is not covered here
docker -v
# socat
yum -y install socat
# conntrack
yum -y install conntrack
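The tool checks above can be bundled into a small preflight script to run on each node. A minimal sketch, assuming the yum-based setup above (`check_bins` is a hypothetical helper, not part of any installer):

```shell
#!/bin/sh
# Preflight sketch: verify the binaries the installer depends on.
# check_bins prints the names of any binaries from its arguments
# that are not found on PATH.
check_bins() {
    missing=""
    for bin in "$@"; do
        command -v "$bin" >/dev/null 2>&1 || missing="$missing $bin"
    done
    echo "$missing"
}

# The cluster installer needs these on every node:
check_bins docker socat conntrack
```

An empty result means all prerequisites are present; otherwise install the listed packages before continuing.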
# Time synchronization
timedatectl
You will see that the reported time differs from our local time.
# Switch the time zone to Shanghai
timedatectl set-timezone Asia/Shanghai
Check again: the local time has switched to China time, and all three machines now show the same time.
yum -y install chrony
After installation, list the time-sync source servers:
chronyc -n sources -v
Check this machine's time synchronization status:
chronyc tracking
Verify the local time:
chronyc tracking
timedatectl status
All three machines now report the same time. If you want one machine in the cluster to act as the authoritative time source, you need to modify the chrony configuration.
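A sketch of that configuration change, assuming k8s-node1 at 10.0.2.23 is chosen as the time source and the cluster sits on 10.0.2.0/24 (both are assumptions; adjust to your network):

```shell
# On the assumed time server (k8s-node1), append to /etc/chrony.conf:
#   allow 10.0.2.0/24    # let the cluster subnet sync from this host
#   local stratum 10     # keep serving time even if upstream is unreachable
#
# On the other nodes, replace the pool/server lines in /etc/chrony.conf with:
#   server 10.0.2.23 iburst
#
# Then on every node:
systemctl restart chronyd
chronyc sources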
# 2. Download KubeKey
Once the three machines can all reach each other, run the following on the master node:
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
The command returns:
Downloading kubekey v1.1.1 from https://github.com/kubesphere/kubekey/releases/download/v1.1.1/kubekey-v1.1.1-linux-amd64.tar.gz ...
Kubekey v1.1.1 Download Complete!
Add execute permission to kk:
chmod +x kk
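If the GitHub download is slow from your network, the kk download script honors the documented `KKZONE` environment variable to fetch from the CN mirror; afterwards, a quick sanity check confirms the binary runs:

```shell
# Optional: re-run the download via the CN mirror, then sanity-check the binary.
export KKZONE=cn
curl -sfL https://get-kk.kubesphere.io | VERSION=v1.1.1 sh -
chmod +x kk
./kk version
```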
# 3. Create the Cluster
# 3.1. Generate a Sample Configuration File
./kk create config --with-kubernetes v1.19.8 --with-kubesphere v3.1.1
This generates config-sample.yaml in the current directory.
# 3.2. Edit the Configuration File
vi config-sample.yaml
apiVersion: kubekey.kubesphere.io/v1alpha1
kind: Cluster
metadata:
  name: sample
spec:
  hosts:
  - {name: k8s-node1, address: 10.0.2.4, internalAddress: 192.168.56.101, user: root, password: vagrant}
  - {name: k8s-node2, address: 10.0.2.5, internalAddress: 192.168.56.102, user: root, password: vagrant}
  - {name: k8s-node3, address: 10.0.2.6, internalAddress: 192.168.56.103, user: root, password: vagrant}
  roleGroups:
    etcd:
    - k8s-node1
    master:
    - k8s-node1
    worker:
    - k8s-node1
    - k8s-node2
    - k8s-node3
  controlPlaneEndpoint:
    domain: lb.kubesphere.local
    address: ""
    port: 6443
  kubernetes:
    version: v1.19.8
    imageRepo: kubesphere
    clusterName: cluster.local
  network:
    plugin: calico
    kubePodsCIDR: 10.233.64.0/18
    kubeServiceCIDR: 10.233.0.0/18
  registry:
    registryMirrors: []
    insecureRegistries: []
  addons: []

---
apiVersion: installer.kubesphere.io/v1alpha1
kind: ClusterConfiguration
metadata:
  name: ks-installer
  namespace: kubesphere-system
  labels:
    version: v3.1.1
spec:
  persistence:
    storageClass: ""
  authentication:
    jwtSecret: ""
  zone: ""
  local_registry: ""
  etcd:
    monitoring: false
    endpointIps: localhost
    port: 2379
    tlsEnable: true
  common:
    redis:
      enabled: false
      redisVolumSize: 2Gi
    openldap:
      enabled: false
      openldapVolumeSize: 2Gi
    minioVolumeSize: 20Gi
    monitoring:
      endpoint: http://prometheus-operated.kubesphere-monitoring-system.svc:9090
    es:
      elasticsearchMasterVolumeSize: 4Gi
      elasticsearchDataVolumeSize: 20Gi
      logMaxAge: 7
      elkPrefix: logstash
      basicAuth:
        enabled: false
        username: ""
        password: ""
      externalElasticsearchUrl: ""
      externalElasticsearchPort: ""
  console:
    enableMultiLogin: true
    port: 30880
  alerting:
    enabled: true
    # thanosruler:
    #   replicas: 1
    #   resources: {}
  auditing:
    enabled: false
  devops:
    enabled: true
    jenkinsMemoryLim: 2Gi
    jenkinsMemoryReq: 1500Mi
    jenkinsVolumeSize: 8Gi
    jenkinsJavaOpts_Xms: 512m
    jenkinsJavaOpts_Xmx: 512m
    jenkinsJavaOpts_MaxRAM: 2g
  events:
    enabled: false
    ruler:
      enabled: true
      replicas: 2
  logging:
    enabled: false
    logsidecar:
      enabled: true
      replicas: 2
  metrics_server:
    enabled: false
  monitoring:
    storageClass: ""
    prometheusMemoryRequest: 400Mi
    prometheusVolumeSize: 20Gi
  multicluster:
    clusterRole: none
  network:
    networkpolicy:
      enabled: false
    ippool:
      type: none
    topology:
      type: none
  openpitrix:
    store:
      enabled: true
  servicemesh:
    enabled: false
  kubeedge:
    enabled: false
    cloudCore:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      cloudhubPort: "10000"
      cloudhubQuicPort: "10001"
      cloudhubHttpsPort: "10002"
      cloudstreamPort: "10003"
      tunnelPort: "10004"
      cloudHub:
        advertiseAddress:
        - ""
        nodeLimit: "100"
      service:
        cloudhubNodePort: "30000"
        cloudhubQuicNodePort: "30001"
        cloudhubHttpsNodePort: "30002"
        cloudstreamNodePort: "30003"
        tunnelNodePort: "30004"
    edgeWatcher:
      nodeSelector: {"node-role.kubernetes.io/worker": ""}
      tolerations: []
      edgeWatcherAgent:
        nodeSelector: {"node-role.kubernetes.io/worker": ""}
        tolerations: []
# 3.3. Start the Installation
./kk create cluster -f config-sample.yaml
When prompted, type yes to confirm, then wait patiently.
The whole installation may take 10 to 20 minutes, depending on your machine and network environment.
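While waiting, you can follow the installer's progress from a second terminal; this is the log-tailing command the KubeSphere docs suggest:

```shell
# Tail the ks-installer pod's logs to watch installation progress.
kubectl logs -n kubesphere-system \
  "$(kubectl get pod -n kubesphere-system -l app=ks-install -o jsonpath='{.items[0].metadata.name}')" -f
```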
# 4. Verify the Installation
When the installation completes, you will see output like the following:
#####################################################
### Welcome to KubeSphere! ###
#####################################################
Console: http://192.168.0.2:30880
Account: admin
Password: P@88w0rd
NOTES:
1. After you log into the console, please check the
monitoring status of service components in
the "Cluster Management". If any service is not
ready, please wait patiently until all components
are up and running.
2. Please change the default password after login.
#####################################################
https://kubesphere.io 20xx-xx-xx xx:xx:xx
#####################################################
You can now access the KubeSphere web console at <NodeIP>:30880 with the default account and password (admin/P@88w0rd).
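Beyond logging into the console, a quick check from the master confirms the cluster itself is healthy:

```shell
# All three nodes should report STATUS=Ready.
kubectl get nodes -o wide
# No pod should stay in Pending or CrashLoopBackOff for long.
kubectl get pods --all-namespaces
```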
Last updated: 2021/09/08, 06:15:12