System Operations
Building a Kubernetes Cluster (kubeadm approach)
1. At least three CentOS machines
a. At least 2 CPU cores, 2 GB RAM, and 20 GB disk each
b. All machines must be on the same subnet
In this example, the addresses are assigned as follows:
master: 10.170.0.7
worker1: 10.170.0.8
worker2: 10.170.0.9
2. Run ip addr to confirm each machine has been assigned an IPv4 address. If not, open nmtui and tick "Automatically connect"
3. Connect to each machine over SSH
4. Disable the firewall
systemctl stop firewalld
systemctl disable firewalld
5. Disable SELinux
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
setenforce 0
6. Disable swap
a. swapoff -a
b. Edit /etc/fstab and comment out the swap line
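Commenting out the swap entry can be scripted instead of editing /etc/fstab by hand. A minimal sketch, shown here against a sample copy so the result can be reviewed before touching the real file (the /tmp path is illustrative):

```shell
# Sketch: comment out any fstab line whose mount point is swap.
# Tested on a sample copy; point the sed at /etc/fstab once verified.
cat > /tmp/fstab.sample <<'EOF'
/dev/sda1 /    ext4 defaults 0 1
/dev/sda2 swap swap defaults 0 0
EOF
sed -ri 's/^([^#].*\sswap\s.*)$/#\1/' /tmp/fstab.sample
cat /tmp/fstab.sample   # the swap line should now start with '#'
```

The regex only touches uncommented lines containing a swap field, so running it twice is harmless.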
7. Reboot the servers and reconnect over SSH
8. When adding the Kubernetes yum repo, an Aliyun or 163 mirror is recommended
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
9. Install kubelet, kubeadm, and kubectl
yum install -y kubelet kubeadm kubectl
10. Enable ipvs for kube-proxy
vi /etc/sysctl.d/k8s.conf
vm.swappiness=0
net.bridge.bridge-nf-call-ip6tables=1
net.bridge.bridge-nf-call-iptables=1
net.ipv4.ip_forward=1
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
cat <<EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules
bash /etc/sysconfig/modules/ipvs.modules
lsmod | grep -e ip_vs -e nf_conntrack_ipv4
Next, make sure the ipset package is installed on every node:
yum install -y ipset
To make it easier to inspect the ipvs proxy rules, the ipvsadm management tool is also recommended:
yum install -y ipvsadm
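The lsmod check above is easy to eyeball wrong across three nodes, so it can be turned into a small filter that prints only the required modules that are still missing. A sketch under the assumption that `lsmod`-style output arrives on stdin (the helper name is our own):

```shell
# Sketch: report which required ipvs kernel modules are NOT loaded,
# given `lsmod`-style output on stdin. Empty output means all loaded.
missing_ipvs_modules() {
  required="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
  loaded=$(awk 'NR>1 {print $1}')          # first column, skip header
  for m in $required; do
    echo "$loaded" | grep -qx "$m" || echo "$m"
  done
}
# Demo on sample output; on a real node run: lsmod | missing_ipvs_modules
printf 'Module Size Used by\nip_vs 172032 6\nip_vs_rr 16384 2\n' | missing_ipvs_modules
```

`grep -qx` matches whole lines, so `ip_vs` being loaded does not mask a missing `ip_vs_rr`.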
11. Install Docker
yum install -y docker
systemctl start docker
systemctl enable docker
12. Enable kubelet at boot
systemctl enable kubelet
13. Change the hostname (very important: identical hostnames will cause problems)
hostnamectl set-hostname XXXX
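Once each node has a unique hostname, it also helps if the machines can resolve each other by name. A sketch that generates matching /etc/hosts lines for the three addresses in this example (the hostnames k8s-master/k8s-node1/k8s-node2 are illustrative, not from the original):

```shell
# Emit /etc/hosts entries for the example cluster; review, then
# append the output to /etc/hosts on every node (as root).
gen_hosts() {
  printf '%s %s\n' \
    10.170.0.7 k8s-master \
    10.170.0.8 k8s-node1 \
    10.170.0.9 k8s-node2
}
gen_hosts
# e.g. gen_hosts >> /etc/hosts
```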
14. Initialize the master node
kubeadm init --kubernetes-version=v1.15.3 --apiserver-advertise-address=10.170.0.7 --pod-network-cidr=10.244.0.0/16
At the time of writing the latest version is v1.15.3; if you pass a wrong version, kubeadm aborts with an error message that tells you the expected one.
If the init times out because Docker cannot pull the images automatically, there are two workarounds:
a. Pull the images manually and retag them
b. Point kubeadm at a different image repository
15. Pull the images manually
docker pull mirrorgooglecontainers/kube-apiserver:v1.15.3
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.3
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.3
docker pull mirrorgooglecontainers/kube-proxy:v1.15.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker tag mirrorgooglecontainers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker tag mirrorgooglecontainers/kube-scheduler:v1.15.3 k8s.gcr.io/kube-scheduler:v1.15.3
docker tag mirrorgooglecontainers/kube-apiserver:v1.15.3 k8s.gcr.io/kube-apiserver:v1.15.3
docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.3 k8s.gcr.io/kube-controller-manager:v1.15.3
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
Mind the version numbers; if they have been updated, change the tags accordingly.
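Since the pull-and-retag commands all follow one pattern, they can be generated from a single version variable, which makes the "change the tags when versions move" advice above a one-line edit. A sketch that only prints the commands for review (pipe to sh once they look right):

```shell
# Generate the docker pull/tag commands for the kubeadm images.
# Versions match step 15; bump K8S_VER for a newer release.
K8S_VER=v1.15.3
gen_pull_cmds() {
  for img in kube-apiserver:$K8S_VER kube-controller-manager:$K8S_VER \
             kube-scheduler:$K8S_VER kube-proxy:$K8S_VER \
             pause:3.1 etcd:3.2.24; do
    echo "docker pull mirrorgooglecontainers/$img"
    echo "docker tag mirrorgooglecontainers/$img k8s.gcr.io/$img"
  done
  echo "docker pull coredns/coredns:1.2.6"
  echo "docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6"
}
gen_pull_cmds
# gen_pull_cmds | sh   # run once the printed commands look right
```

Printing first and piping to sh second keeps a typo in the version from silently pulling the wrong images.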
16. Use a different image repository
kubeadm init --image-repository=mirrorgooglecontainers
17. Re-run the kubeadm init from step 14
18. Once the init succeeds, the output confirms the cluster started and prints the command workers use to join. Run it on each worker:
kubeadm join 10.170.0.7:6443 --token xrhtyd.b61v0mzuu6cea8qg \
    --discovery-token-ca-cert-hash sha256:2e5cb96ef0be0e791acb923cf371303b13bda4613b4fd6d11a59bc17a7f8c3dd
19. Run the following commands
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
20. At this point, kubectl get cs shows the status of the control-plane components; make sure everything is healthy
21. Install the network plugin (the officially recommended flannel). The pod network must match the pod-network-cidr set during init
yum install -y wget
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f kube-flannel.yml
22. Run the join command from step 18 on each worker
The cluster is now built. You can check the state of each node with kubectl get nodes.
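Rather than reading the `kubectl get nodes` table by eye, a small filter can print only the nodes that are not Ready. A sketch, demonstrated here on sample output since the live command needs the cluster:

```shell
# Print the name of every node whose STATUS column is not "Ready".
not_ready() { awk 'NR>1 && $2 != "Ready" {print $1}'; }
# Sample output standing in for a live cluster:
printf 'NAME STATUS ROLES AGE VERSION\nmaster Ready master 5m v1.15.3\nworker1 NotReady <none> 1m v1.15.3\n' | not_ready
# On the cluster: kubectl get nodes | not_ready
```

Empty output means all nodes have joined and flannel is routing pod traffic.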
Installing Prometheus
1. Pull the required images on every node
docker pull prom/node-exporter
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:4.2.0
2. Deploy the node-exporter component as a DaemonSet
vi node-exporter.yaml
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter
kubectl create -f node-exporter.yaml
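Once the DaemonSet is running, every node should answer on the NodePort, e.g. `curl http://<node-ip>:31672/metrics`, returning Prometheus text format. A sketch that pulls one value out of such output, demonstrated on a sample instead of a live endpoint:

```shell
# Extract the value of a named metric from Prometheus text-format
# output on stdin (helper name is our own).
metric_value() { awk -v m="$1" '$1 == m {print $2}'; }
printf '# HELP node_load1 1m load average.\nnode_load1 0.21\nnode_boot_time 1.56e+09\n' \
  | metric_value node_load1
# Live check: curl -s http://10.170.0.8:31672/metrics | metric_value node_load1
```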
3. Deploy the prometheus component
vi rbac-setup.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
4. Manage the prometheus configuration file as a ConfigMap
vi configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval: 15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: "kubernetes-apiservers"
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: "kubernetes-nodes"
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: "kubernetes-cadvisor"
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: "kubernetes-service-endpoints"
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: "kubernetes-services"
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: "kubernetes-ingresses"
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme, __address__, __meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: "kubernetes-pods"
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
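The trickiest relabel rule in that config is the one that rewrites __address__: it joins the discovered host with the prometheus.io/port annotation, dropping any port the address already had. Prometheus uses RE2, but sed's extended regex is close enough to demonstrate what `([^:]+)(?::\d+)?;(\d+)` with replacement `$1:$2` does (sed lacks non-capturing groups, so the sketch captures the optional port and simply ignores it):

```shell
# Demonstrate the __address__ relabel: "host[:port];annotation_port"
# becomes "host:annotation_port". sed -E approximation of the RE2 rule.
relabel_address() { sed -E 's/^([^:]+)(:[0-9]+)?;([0-9]+)$/\1:\3/'; }
echo '10.244.1.5:80;9100' | relabel_address
echo '10.244.1.5;9100'    | relabel_address
```

Both inputs come out as 10.244.1.5:9100, which is why annotating a pod or service with prometheus.io/port is enough to redirect scraping to the exporter port.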
5. Deployment file
vi prometheus.deploy.yml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: /prometheus
          name: data
        - mountPath: /etc/prometheus
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config
6. Prometheus service file
vi prometheus.svc.yml
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus
7. Create the resources
kubectl create -f rbac-setup.yaml
kubectl create -f configmap.yaml
kubectl create -f prometheus.deploy.yml
kubectl create -f prometheus.svc.yml