诺普培训 original article: "Deploying Kubernetes on Ubuntu"
Lab environment:
Desktop: client machine
Storage/utility: tools host (student account, Dockerfile) # 192.168.19.254 # gateway; this VM must stay powered on
Master: K8s management node # 192.168.19.100
Node1: K8s node 1 # 192.168.19.101
Node2: K8s node 2 # 192.168.19.102
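The steps below assume each VM can resolve the others and meets kubeadm's preflight checks. If the lab image does not already provide this, a minimal sketch (hostnames and IPs taken from the table above; run on every node):
sudo tee -a /etc/hosts <<EOF
192.168.19.100 master
192.168.19.101 node1
192.168.19.102 node2
EOF
# kubeadm requires swap to be off; the lab image may already do this.
sudo swapoff -a
sudo sed -i '/\sswap\s/ s/^/#/' /etc/fstab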
1. Deploy the Kubernetes master on the master node
Install the prerequisite packages:
student@master:~$ sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https
Add the Aliyun apt key:
student@master:~$ curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
OK
student@master:~$
Configure the Aliyun Kubernetes apt source:
student@master:~$ sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
student@master:~$
Update the package index:
student@master:~$ sudo apt-get update
Hit:1 http://mirrors.aliyun.com/ubuntu focal InRelease
Hit:2 http://mirrors.aliyun.com/ubuntu focal-security InRelease
Hit:3 http://mirrors.aliyun.com/ubuntu focal-updates InRelease
Get:4 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial InRelease
Hit:5 http://mirrors.aliyun.com/ubuntu focal-proposed InRelease
Hit:6 http://mirrors.aliyun.com/ubuntu focal-backports InRelease
Ign:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
Get:7 https://mirrors.aliyun.com/kubernetes/apt kubernetes-xenial/main amd64 Packages
Fetched 47.8 kB in 2s (26.6 kB/s)
Reading package lists... Done
student@master:~$
Install kubelet, kubeadm, and kubectl on master, node1, and node2:
student@master:~$ sudo apt-get install -y kubelet kubeadm kubectl
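Note that the node listing later in this article shows kubelet v1.18.6 even though the init below pins Kubernetes v1.18.2. To keep every machine on one version, the apt packages can be pinned and held (a sketch, assuming the 1.18.2-00 package revision is still available in the mirror):
sudo apt-get install -y kubelet=1.18.2-00 kubeadm=1.18.2-00 kubectl=1.18.2-00
sudo apt-mark hold kubelet kubeadm kubectl   # prevent unattended upgrades from skewing versions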
Initialize the master:
sudo kubeadm init --kubernetes-version=1.18.2 \
--apiserver-advertise-address=192.168.19.100 \
--image-repository registry.aliyuncs.com/google_containers \
--service-cidr=10.1.0.0/16 \
--pod-network-cidr=10.244.0.0/16
Parameters:
--kubernetes-version=1.18.2                                   # install Kubernetes v1.18.2
--apiserver-advertise-address=192.168.19.100                  # the master's IP, which the API server advertises
--image-repository registry.aliyuncs.com/google_containers    # pull the control-plane images from Aliyun's mirror
--service-cidr=10.1.0.0/16                                    # Service (VIP) network, implemented by kube-proxy
--pod-network-cidr=10.244.0.0/16                              # Pod network CIDR
student@master:~$ sudo kubeadm init --kubernetes-version=1.18.2 \
> --apiserver-advertise-address=192.168.19.100 \
> --image-repository registry.aliyuncs.com/google_containers \
> --service-cidr=10.1.0.0/16 \
> --pod-network-cidr=10.244.0.0/16
W0812 20:07:16.089865 3731 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups
Using Kubernetes version: v1.18.2
Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Pulling images required for setting up a Kubernetes cluster
This might take a minute or two, depending on the speed of your internet connection
You can also perform this action in beforehand using 'kubeadm config images pull'
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Starting the kubelet
Using certificateDir folder "/etc/kubernetes/pki"
Generating "ca" certificate and key
Generating "apiserver" certificate and key
apiserver serving cert is signed for DNS names [master kubernetes kubernetes.defa
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f .yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.19.100:6443 --token la82lq.j25bot0eopia3knp \
--discovery-token-ca-cert-hash sha256:221335f0da68ce2395509d37f1abf5805b73a999ff4233f6c49d633aeb8fd63d
student@master:~$
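The join token printed above expires (by default after 24 hours). If a node is joined later, a fresh join command can be printed on the master:
sudo kubeadm token create --print-join-command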
student@master:~$ sudo docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
39d01acb2e6f bc9c328f379c "/usr/local/bin/kube…" 5 minutes ago Up 5 minutes k8s_kube-proxy_kube-proxy-64fb4_kube-system_2f7fe8d4-cbf8-423b-bc89-ade5014a310b_0
0eddb2c25fa9 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 5 minutes ago Up 5 minutes k8s_POD_kube-proxy-64fb4_kube-system_2f7fe8d4-cbf8-423b-bc89-ade5014a310b_0
81387b2c2f34 d4ca8726196c "etcd --advertise-cl…" 6 minutes ago Up 6 minutes k8s_etcd_etcd-master_kube-system_754d7b14ce170dd6f2ea9b723326e8c5_0
d61b96c18a3e cbdc8369d8b1 "kube-scheduler --au…" 6 minutes ago Up 6 minutes k8s_kube-scheduler_kube-scheduler-master_kube-system_670a3f9629c937daf0c4a0b80213c1f8_0
596ae33f57ea 09d665d529d0 "kube-controller-man…" 6 minutes ago Up 6 minutes k8s_kube-controller-manager_kube-controller-manager-master_kube-system_bf4923690b64f1f087e9dea15973941f_0
b6d5b5959251 1b74e93ece2f "kube-apiserver --ad…" 6 minutes ago Up 6 minutes k8s_kube-apiserver_kube-apiserver-master_kube-system_fa44272de38ca0bd51456a31b1356cbe_0
c21595505346 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-scheduler-master_kube-system_670a3f9629c937daf0c4a0b80213c1f8_0
a3d04c1d5df5 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-controller-manager-master_kube-system_bf4923690b64f1f087e9dea15973941f_0
705e270d20a7 registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_kube-apiserver-master_kube-system_fa44272de38ca0bd51456a31b1356cbe_0
38817872b00b registry.aliyuncs.com/google_containers/pause:3.2 "/pause" 6 minutes ago Up 6 minutes k8s_POD_etcd-master_kube-system_754d7b14ce170dd6f2ea9b723326e8c5_0
student@master:~$
Component overview:
kubelet: manages the Docker service; Kubernetes drives Docker through its API via the kubelet. It is a local service managed by the operating system, it also runs static Pods, and it must be installed on the master and every node.
kubectl: the command-line management tool.
Master (a single node in this lab) runs:
  kubelet: as above; runs on all nodes, including the master.
  api-server: the API server.
  scheduler: the scheduler.
  etcd: the database; stores all cluster data (a distributed data store).
  kube-proxy: implements the Service VIP (similar to haproxy), distributing traffic from svc (Service) to Pod IPs.
  kubectl: command-line tool.
Node runs:
  kubelet: manages Docker through the Docker API; a local OS-managed service.
  kube-proxy: Service VIP, svc (Service) ----> Pod IP traffic distribution.
  kubectl: command-line tool.
The master components themselves are delivered as static Pods, as shown below.
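kubeadm writes the master components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler) as static Pod manifests, which the master's kubelet picks up directly; you can confirm this on the master:
ls /etc/kubernetes/manifests
# expected files: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml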
Configure authentication for kubectl:
student@master:~$ mkdir -p $HOME/.kube
student@master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
student@master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
student@master:~$
Install the Calico network plugin, which supports network policies (flannel is not recommended here because it does not support network policies):
student@master:~$ wget https://docs.projectcalico.org/v3.11/manifests/calico.yaml
--2020-08-12 20:11:31--  https://docs.projectcalico.org/v3.11/manifests/calico.yaml
Resolving docs.projectcalico.org (docs.projectcalico.org)... 178.128.17.49, 157.230.35.153, 2400:6180:0:d1::575:a001, ...
Connecting to docs.projectcalico.org (docs.projectcalico.org)|178.128.17.49|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 20847 (20K)
Saving to: ‘calico.yaml’
calico.yaml  100%[================================================================>]  20.36K  5.07KB/s  in 4.0s
2020-08-12 20:11:44 (5.07 KB/s) - ‘calico.yaml’ saved
student@master:~$
Edit the YAML file:
Change CALICO_IPV4POOL_CIDR to 10.244.0.0/16, so it matches the --pod-network-cidr given to kubeadm init.
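A sketch of this edit with sed (assuming the manifest's default pool is 192.168.0.0/16, as the v3.11 manifest ships; verify with grep first):
grep -n -A1 CALICO_IPV4POOL_CIDR calico.yaml
sed -i 's|192.168.0.0/16|10.244.0.0/16|' calico.yaml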
Deploy the Calico network plugin:
student@master:~$ kubectl apply -f calico.yaml
View all Docker containers and the node list (output not shown). Then check the network components:
student@master:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
calico-kube-controllers-6b8f6f78dc-52p2s 1/1 Running 0 8m39s # the network plugin's controller (a single management Pod)
calico-node-6266x 1/1 Running 0 8m39s # per-node network agent
calico-node-cxsdq 1/1 Running 0 8m39s # per-node network agent
calico-node-xkv57 1/1 Running 0 8m39s # per-node network agent
coredns-6d56c8448f-5zb5l 1/1 Running 0 3h18m
coredns-6d56c8448f-bmwxb 1/1 Running 0 3h18m
etcd-master 1/1 Running 0 3h18m
kube-apiserver-master 1/1 Running 0 3h18m
kube-controller-manager-master 1/1 Running 0 3h18m
kube-proxy-64fb4 1/1 Running 0 3h18m
kube-proxy-ckg72 1/1 Running 0 167m
kube-proxy-cxftl 1/1 Running 0 166m
kube-scheduler-master 1/1 Running 0 3h18m
student@master:~$
student@master:~$ sudo systemctl enable kubelet
Check which node (physical machine) each Pod runs on:
student@master:~$ kubectl get pods -n kube-system -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
calico-kube-controllers-6b8f6f78dc-52p2s 1/1 Running 0 11m 10.244.166.129 node1 <none> <none>
calico-node-6266x 1/1 Running 0 11m 192.168.19.100 master <none> <none>
calico-node-cxsdq 1/1 Running 0 11m 192.168.19.101 node1 <none> <none>
calico-node-xkv57 1/1 Running 0 11m 192.168.19.102 node2 <none> <none>
coredns-6d56c8448f-5zb5l 1/1 Running 0 3h20m 10.244.166.130 node1 <none> <none>
coredns-6d56c8448f-bmwxb 1/1 Running 0 3h20m 10.244.219.65 master <none> <none>
etcd-master 1/1 Running 0 3h20m 192.168.19.100 master <none> <none>
kube-apiserver-master 1/1 Running 0 3h20m 192.168.19.100 master <none> <none>
kube-controller-manager-master 1/1 Running 0 3h20m 192.168.19.100 master <none> <none>
kube-proxy-64fb4 1/1 Running 0 3h20m 192.168.19.100 master <none> <none>
kube-proxy-ckg72 1/1 Running 0 170m 192.168.19.101 node1 <none> <none>
kube-proxy-cxftl 1/1 Running 0 168m 192.168.19.102 node2 <none> <none>
kube-scheduler-master 1/1 Running 0 3h20m 192.168.19.100 master <none> <none>
student@master:~$
2. Install kubelet, kubeadm, and kubectl on node1 and node2:
student@node1:~$ sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https curl
student@node2:~$ sudo apt-get update && sudo apt-get install -y ca-certificates curl software-properties-common apt-transport-https curl
student@node1:~$ sudo curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
OK
student@node1:~$
student@node2:~$ sudo curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | sudo apt-key add -
OK
student@node2:~$
student@node1:~$ sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
student@node1:~$
student@node2:~$ sudo tee /etc/apt/sources.list.d/kubernetes.list <<EOF
> deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
> EOF
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
student@node2:~$
student@node1:~$ sudo apt-get update
student@node2:~$ sudo apt-get update
student@node1:~$ sudo apt-get install -y kubelet kubeadm kubectl
student@node2:~$ sudo apt-get install -y kubelet kubeadm kubectl
3. Join node1 and node2 to the Kubernetes cluster as workers:
student@node1:~$ sudo kubeadm join 192.168.19.100:6443 --token la82lq.j25bot0eopia3knp \
> --discovery-token-ca-cert-hash sha256:221335f0da68ce2395509d37f1abf5805b73a999ff4233f6c49d633aeb8fd63d
W0812 20:32:28.599890 3072 join.go:346] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Reading configuration from the cluster...
FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Starting the kubelet
Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
student@node1:~$
student@node2:~$ sudo kubeadm join 192.168.19.100:6443 --token la82lq.j25bot0eopia3knp \
> --discovery-token-ca-cert-hash sha256:221335f0da68ce2395509d37f1abf5805b73a999ff4233f6c49d633aeb8fd63d
W0812 20:33:35.312105 3276 join.go:346] WARNING: JoinControlPane.controlPlane settings will be ignored when control-plane flag is not set.
Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
Reading configuration from the cluster...
FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
Starting the kubelet
Waiting for the kubelet to perform the TLS Bootstrap...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
student@node2:~$
On the master, list all nodes (the kubectl get nodes output appears in step 4 below).
4. Configure kubectl command completion, so kubectl arguments can be tab-completed:
student@master:~$ sudo apt-get install bash-completion
Reading package lists... Done
Building dependency tree
Reading state information... Done
bash-completion is already the newest version (1:2.10-1ubuntu1).
bash-completion set to manually installed.
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
student@master:~$ source <(kubectl completion bash)
student@master:~$ echo "source <(kubectl completion bash)" >> ~/.bashrc
student@master:~$ source ~/.bashrc
student@master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 35m v1.18.6
node1 Ready <none> 11m v1.18.6
node2 Ready <none> 10m v1.18.6
student@master:~$
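Optionally, a common convenience is a short alias that keeps completion working (__start_kubectl is the completion entry point defined by 'kubectl completion bash'):
echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc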
List the namespaces that ship with the system:
student@master:~$ kubectl get namespaces
NAME STATUS AGE
default Active 170m # installed by default
kube-node-lease Active 170m
kube-public Active 170m
kube-system Active 170m # Kubernetes system resources
student@master:~$
Kubernetes Pod resources:
student@master:~$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-6d56c8448f-5zb5l 0/1 Pending 0 172m
coredns-6d56c8448f-bmwxb 0/1 Pending 0 172m
etcd-master 1/1 Running 0 172m
kube-apiserver-master 1/1 Running 0 172m
kube-controller-manager-master 1/1 Running 0 172m
kube-proxy-64fb4 1/1 Running 0 172m
kube-proxy-ckg72 1/1 Running 0 141m
kube-proxy-cxftl 1/1 Running 0 140m
kube-scheduler-master 1/1 Running 0 172m
student@master:~$
5. Configure the Aliyun Docker registry mirror:
student@master:~$ cd /etc/docker/
student@master:/etc/docker$ ls
key.json
student@master:/etc/docker$ sudo touch daemon.json
student@master:/etc/docker$ sudo vim daemon.json
student@master:/etc/docker$ cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://i1pfdcu7.mirror.aliyuncs.com"]
}
student@master:/etc/docker$
student@node1:~$ cd /etc/docker/
student@node1:/etc/docker$ sudo touch daemon.json
student@node1:/etc/docker$ vim daemon.json
student@node1:/etc/docker$ sudo vim daemon.json
student@node1:/etc/docker$ cat daemon.json
{
"registry-mirrors": ["https://i1pfdcu7.mirror.aliyuncs.com"]
}
student@node1:/etc/docker$
student@node2:~$ cd /etc/docker/
student@node2:/etc/docker$ sudo touch daemon.json
student@node2:/etc/docker$ sudo vim daemon.json
student@node2:/etc/docker$ cat daemon.json
{
"registry-mirrors": ["https://i1pfdcu7.mirror.aliyuncs.com"]
}
student@node2:/etc/docker$
Restart the master, node1, and node2 nodes.
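A full reboot works, but restarting dockerd is enough to reload daemon.json; a sketch:
sudo systemctl restart docker
docker info | grep -A1 'Registry Mirrors'   # confirm the mirror took effect
(The "cgroupfs" warning kubeadm printed earlier can be avoided by adding "exec-opts": ["native.cgroupdriver=systemd"] to daemon.json, but on an already-initialized cluster the kubelet's cgroupDriver setting must be changed to match, so this is easiest to do before kubeadm init.)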
User (production) Pods will not be scheduled onto the master node, because it carries a taint:
student@master:~$ kubectl describe nodes master
Name: master
Roles: master
Labels: beta.kubernetes.io/arch=amd64
beta.kubernetes.io/os=linux
kubernetes.io/arch=amd64
kubernetes.io/hostname=master
kubernetes.io/os=linux
node-role.kubernetes.io/master=
Annotations: kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
node.alpha.kubernetes.io/ttl: 0
projectcalico.org/IPv4Address: 192.168.19.100/24
projectcalico.org/IPv4IPIPTunnelAddr: 10.244.219.64
volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp: Wed, 09 Sep 2020 11:27:04 +0800
Taints: node-role.kubernetes.io/master:NoSchedule # the taint
Unschedulable: false
Lease:
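If you do want test Pods to land on the master (for example in a one-node lab), the taint can be removed; the trailing '-' deletes it:
kubectl taint nodes master node-role.kubernetes.io/master:NoSchedule-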
6. Test:
Pull the nginx Docker image on node1 and node2:
student@node1:~$ sudo docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
bf5952930446: Pull complete
ba755a256dfe: Pull complete
c57dd87d0b93: Pull complete
d7fbf29df889: Pull complete
1f1070938ccd: Pull complete
Digest: sha256:36b74457bccb56fbf8b05f79c85569501b721d4db813b684391d63e02287c0b2
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
student@node1:~$
root@node2:/home/student# docker pull nginx
Using default tag: latest
latest: Pulling from library/nginx
bf5952930446: Pull complete
ba755a256dfe: Pull complete
c57dd87d0b93: Pull complete
d7fbf29df889: Pull complete
1f1070938ccd: Pull complete
Digest: sha256:36b74457bccb56fbf8b05f79c85569501b721d4db813b684391d63e02287c0b2
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest
root@node2:/home/student#
Create a Pod:
student@master:~$ kubectl run --image=nginx --image-pull-policy=IfNotPresent --port=80 web-nginx
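--image-pull-policy=IfNotPresent lets the kubelet use the nginx image pre-pulled on the nodes above. To verify where the Pod landed and that nginx answers, a sketch (the Pod IP reported by -o wide will differ per cluster):
kubectl get pods -o wide
curl http://<pod-ip>   # run from any cluster node; should return the nginx welcome page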
The transcript below is from a second lab cluster (hostnames master.example.com / node*.example.com, using flannel); it deploys metrics-server so that kubectl top works.
# ls
aliyum-kube-flannel.yml  Documents             Music          pod6.yml            Public
anaconda-ks.cfg          doube-pod7.yml        mysql-pvc.yml  pod-init.yml        root@node2
chap4                    Downloads             mysql-pv.yml   pod-iscsi.yml       Templates
chap5                    initial-setup-ks.cfg  Pictures       pod-run-yaml.yml    test.yml
chap7                    kube-flannel.yml      pod1.yml       pod-selector.yml    Videos
crontab.yml              kubernet-dashboard.yml              pod2.yml            pod-volume-1.yml         wordpress-mysql.yml
dashboard-certs          kubernetes-dashboard-account.yml    pod3.yml            pod-volume-emptyDir.yml  wordpress-pvc.yml
dc1.yml                  kubernetes-dashboard-role-bing.yml  pod4.yml            pod-volume-hostPath.yml  wordpress-pv.yml
Desktop                  metrics-server.yaml   pod5.yml       pod-web5.yml        wordpress.yml
#
# kubectl create -f metrics-server.yaml
# kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7ff77c879f-2mckp 1/1 Running 102 53d
coredns-7ff77c879f-kv5d2 1/1 Running 102 53d
etcd-master.example.com 1/1 Running 19 53d
kube-apiserver-master.example.com 1/1 Running 3 6d3h
kube-controller-manager-master.example.com 1/1 Running 20 53d
kube-flannel-ds-amd64-lfh87 1/1 Running 20 53d
kube-flannel-ds-amd64-ltb2t 1/1 Running 19 53d
kube-flannel-ds-amd64-zjxkd 1/1 Running 21 53d
kube-proxy-6zcq8 1/1 Running 2 6d2h
kube-proxy-fznrc 1/1 Running 2 6d2h
kube-proxy-znrr9 1/1 Running 2 6d2h
kube-scheduler-master.example.com 1/1 Running 19 53d
kuboard-8b8574658-g2dmb 1/1 Running 5 6d18h
metrics-server-7f96bbcc66-8srjm 1/1 Running 8 6d18h
#
# kubectl get svc -n kube-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kube-dns ClusterIP 10.1.0.10 <none> 53/UDP,53/TCP,9153/TCP 53d
kuboard NodePort 10.1.186.34 <none> 80:32567/TCP 52d
metrics-server ClusterIP 10.1.38.12 <none> 443/TCP 52d
#
Test:
# kubectl top nodes
NAME CPU(cores) CPU% MEMORY(bytes) MEMORY%
master.example.com 479m 11% 1675Mi 21%
node1.example.com 141m 3% 693Mi 8%
node2.example.com 108m 2% 444Mi 5%
#
Configure the Kubernetes Dashboard web UI:
# cat kubernet-dashboard.yml
# Copyright 2017 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
apiVersion: v1
kind: Namespace
metadata:
  name: kubernetes-dashboard
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30008
  selector:
    k8s-app: kubernetes-dashboard
#---
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-csrf
  namespace: kubernetes-dashboard
type: Opaque
data:
  csrf: ""
---
apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-key-holder
  namespace: kubernetes-dashboard
type: Opaque
---
kind: ConfigMap
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-settings
  namespace: kubernetes-dashboard
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
rules:
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
  - apiGroups: [""]
    resources: ["secrets"]
    resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs", "kubernetes-dashboard-csrf"]
    verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
  - apiGroups: [""]
    resources: ["configmaps"]
    resourceNames: ["kubernetes-dashboard-settings"]
    verbs: ["get", "update"]
  # Allow Dashboard to get metrics.
  - apiGroups: [""]
    resources: ["services"]
    resourceNames: ["heapster", "dashboard-metrics-scraper"]
    verbs: ["proxy"]
  - apiGroups: [""]
    resources: ["services/proxy"]
    resourceNames: ["heapster", "http:heapster:", "https:heapster:", "dashboard-metrics-scraper", "http:dashboard-metrics-scraper"]
    verbs: ["get"]
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
rules:
  # Allow Metrics Scraper to get metrics from the Metrics server
  - apiGroups: ["metrics.k8s.io"]
    resources: ["pods", "nodes"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: kubernetes-dashboard
subjects:
  - kind: ServiceAccount
    name: kubernetes-dashboard
    namespace: kubernetes-dashboard
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.0.1
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  ports:
    - port: 8000
      targetPort: 8000
  selector:
    k8s-app: dashboard-metrics-scraper
---
kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: dashboard-metrics-scraper
  name: dashboard-metrics-scraper
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: dashboard-metrics-scraper
  template:
    metadata:
      labels:
        k8s-app: dashboard-metrics-scraper
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      containers:
        - name: dashboard-metrics-scraper
          image: kubernetesui/metrics-scraper:v1.0.4
          ports:
            - containerPort: 8000
              protocol: TCP
          livenessProbe:
            httpGet:
              scheme: HTTP
              path: /
              port: 8000
            initialDelaySeconds: 30
            timeoutSeconds: 30
          volumeMounts:
            - mountPath: /tmp
              name: tmp-volume
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      volumes:
        - name: tmp-volume
          emptyDir: {}
#
# mkdir dashboard-certs
# ls
aliyum-kube-flannel.yml  Downloads                        Pictures
anaconda-ks.cfg          initial-setup-ks.cfg             Public
dashboard-certs          kube-flannel.yml                 root@node2
Desktop                  kubernetes-dashboard-v2.0.0.yml  Templates
Documents                Music                            Videos
# cd dashboard-certs/
Create the kubernetes-dashboard namespace:
# kubectl create namespace kubernetes-dashboard
namespace/kubernetes-dashboard created
# openssl genrsa -out dashboard.key 2048
Generating RSA private key, 2048 bit long modulus
...........+++
.................................+++
e is 65537 (0x10001)
# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/CN=dashboard-cert
Getting Private key
# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
secret/kubernetes-dashboard-certs created
# cd ..
# ls
aliyum-kube-flannel.yml  Downloads                        Pictures
anaconda-ks.cfg          initial-setup-ks.cfg             Public
dashboard-certs          kube-flannel.yml                 root@node2
Desktop                  kubernetes-dashboard-v2.0.0.yml  Templates
Documents                Music                            Videos
#
# kubectl create -f kubernet-dashboard.yml
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Error from server (AlreadyExists): error when creating "kubernet-dashboard.yml": namespaces "kubernetes-dashboard" already exists
# kubectl create -f kubernet-dashboard.yml
# kubectl create -f kubernetes-dashboard-account.yml
# kubectl create -f kubernetes-dashboard-role-bing.yml
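The contents of kubernetes-dashboard-account.yml and kubernetes-dashboard-role-bing.yml are not shown in the article; judging from the token lookup below, they create a ServiceAccount whose name contains dashboard-admin and bind it to cluster-admin. An equivalent sketch using kubectl directly (names assumed):
kubectl -n kubernetes-dashboard create serviceaccount dashboard-admin
kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kubernetes-dashboard:dashboard-admin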
Get the login token:
# kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep dashboard-admin | awk '{print $1}')
Then open https://192.168.19.101:30008 in a browser (NodePort 30008 on any node's IP) and log in with the token.