
Kubernetes Binary Deployment: Multi-Node Deployment


Before starting this lab, you need to have already deployed a single-master k8s cluster.

Single-node deployment blog post:

https://blog.51cto.com/14449528/2469980

Multi-master cluster architecture diagram:

Deploying master2

1. First, stop the firewall service on master2

[root@master2 ~]# systemctl stop firewalld.service

[root@master2 ~]# setenforce 0

2. On master1, copy the kubernetes directory and the server component unit files over to master2

[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.140:/opt

[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.140:/usr/lib/systemd/system/
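As a quick sanity check (not part of the original walkthrough), you can confirm on master2 that the copied tree and unit files landed where the services expect them:

[root@master2 ~]# ls /opt/kubernetes/bin /opt/kubernetes/cfg /opt/kubernetes/ssl

[root@master2 ~]# ls /usr/lib/systemd/system/kube-*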

3. Modify the configuration file on master2

[root@master2 ~]# cd /opt/kubernetes/cfg/

[root@master2 cfg]# vim kube-apiserver

5 --bind-address=192.168.18.140 \

7 --advertise-address=192.168.18.140 \

The IP addresses on lines 5 and 7 must be changed to master2's address (a sed shortcut is sketched below).
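If you prefer not to edit the file by hand, a minimal sed sketch that touches only those two options (and therefore leaves entries such as the --etcd-servers list untouched) would be the following; check the result with grep afterwards:

[root@master2 cfg]# sed -i 's/--bind-address=[0-9.]*/--bind-address=192.168.18.140/; s/--advertise-address=[0-9.]*/--advertise-address=192.168.18.140/' kube-apiserver

[root@master2 cfg]# grep -E 'bind-address|advertise-address' kube-apiserver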

4. Copy the existing etcd certificates from master1 to master2

(Note: master2 must have the etcd certificates, otherwise the apiserver service will not start.)

[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.132:/opt/

root@192.168.18.132's password:

etcd 100% 516 535.5KB/s 00:00

etcd 100% 18MB 90.6MB/s 00:00

etcdctl 100% 15MB 80.5MB/s 00:00

ca-key.pem 100% 1675 1.4MB/s 00:00

ca.pem 100% 1265 411.6KB/s 00:00

server-key.pem 100% 792.0MB/s 00:00

server.pem 100% 384 29.6KB/s 00:00

5. Start the three component services on master2

[root@master2 cfg]# systemctl start kube-apiserver.service    ## start the service

[root@master2 cfg]# systemctl enable kube-apiserver.service    ## enable the service at boot

[root@master2 cfg]# systemctl start kube-controller-manager.service

[root@master2 cfg]# systemctl enable kube-controller-manager.service

[root@master2 cfg]# systemctl start kube-scheduler.service

[root@master2 cfg]# systemctl enable kube-scheduler.service
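As an optional check (not in the original steps), confirm that all three control-plane services are actually running before moving on:

[root@master2 cfg]# systemctl is-active kube-apiserver kube-controller-manager kube-scheduler    ## should print "active" three times

[root@master2 cfg]# journalctl -u kube-apiserver --no-pager | tail    ## inspect recent apiserver logs if anything failed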

6. Configure the environment variable

[root@master2 cfg]# vim /etc/profile

export PATH=$PATH:/opt/kubernetes/bin/    ## add the binaries to PATH

[root@master2 cfg]# source /etc/profile    ## reload the profile

[root@master2 cfg]# kubectl get node    ## show cluster node information

NAME             STATUS   ROLES    AGE   VERSION

192.168.18.129   Ready    <none>   21h   v1.12.3

192.168.18.130   Ready    <none>   22h   v1.12.3

# This shows that node1 and node2 have both joined the cluster
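Another optional sanity check from master2 is to query the component statuses; in the v1.12 release used here this still works and should list the scheduler, controller-manager, and etcd members as Healthy:

[root@master2 cfg]# kubectl get cs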

---- At this point the deployment of master2 is complete ----

Deploying Nginx load balancing

Perform the same operations on both lb01 and lb02.

Install the nginx service, and copy the nginx.sh and keepalived.conf scripts into the home directory.

[root@localhost ~]# ls

anaconda-ks.cfg    keepalived.conf    Public    Videos    Documents    Music

initial-setup-ks.cfg    nginx.sh    Templates    Pictures    Downloads    Desktop

[root@lb1 ~]# systemctl stop firewalld.service

[root@lb1 ~]# setenforce 0

[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo

[nginx]

name=nginx repo

baseurl=http://nginx.org/packages/centos/7/$basearch/

gpgcheck=0

Reload the yum repository metadata

[root@lb1 ~]# yum list

Install the nginx service

[root@lb1 ~]# yum install nginx -y

[root@lb1 ~]# vim /etc/nginx/nginx.conf

Insert the stream block at line 12 (at the top level of nginx.conf, alongside the http block, not inside it):

stream {

    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log /var/log/nginx/k8s-access.log main;

    upstream k8s-apiserver {

        server 192.168.18.128:6443;    # master1's IP address

        server 192.168.18.140:6443;    # master2's IP address

    }

    server {

        listen 6443;

        proxy_pass k8s-apiserver;

    }

}

## Check the syntax

[root@lb1 ~]# nginx -t

nginx: the configuration file /etc/nginx/nginx.conf syntax is ok

nginx: configuration file /etc/nginx/nginx.conf test is successful

# Modify the home pages so the two hosts can be told apart

[root@lb1 ~]# cd /usr/share/nginx/html/

[root@lb1 html]# ls

50x.html index.html

[root@lb1 html]# vim index.html

14 <h2>Welcome to master nginx!</h2>    # add "master" on line 14 to tell this host apart

[root@lb2 ~]# cd /usr/share/nginx/html/

[root@lb2 html]# ls

50x.html index.html

[root@lb2 html]# vim index.html

14 <h2>Welcome to backup nginx!</h2>    # add "backup" on line 14 to tell this host apart

# Start the service

[root@lb1 ~]# systemctl start nginx

[root@lb2 ~]# systemctl start nginx

Verify access in a browser: entering 192.168.18.150 brings up the master nginx home page.

Entering 192.168.18.151 in the browser brings up the backup nginx home page.
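If you would rather verify from the command line than a browser, a simple check (assuming the pages were edited as above) is:

[root@lb1 ~]# curl -s http://192.168.18.150 | grep -i master    # should match the master page

[root@lb1 ~]# curl -s http://192.168.18.151 | grep -i backup    # should match the backup page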

Installing and deploying keepalived

The operations are the same on lb01 and lb02.

1. Install keepalived

[root@lb1 html]# yum install keepalived -y

2. Modify the configuration file

[root@lb1 ~]# ls

anaconda-ks.cfg    keepalived.conf    Public    Videos    Documents    Music

initial-setup-ks.cfg    nginx.sh    Templates    Pictures    Downloads    Desktop

[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf

cp: overwrite '/etc/keepalived/keepalived.conf'? yes

[root@lb1 ~]# vim /etc/keepalived/keepalived.conf

# The MASTER configuration on lb01 is as follows:

! Configuration File for keepalived

global_defs {

   # notification email recipients

   notification_email {

     acassen@firewall.loc

     failover@firewall.loc

     sysadmin@firewall.loc

   }

   # email sender

   notification_email_from Alexandre.cassen@firewall.loc

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id NGINX_MASTER

}

vrrp_script check_nginx {

    script "/etc/nginx/check_nginx.sh"

}

vrrp_instance VI_1 {

    state MASTER

    interface ens33

    virtual_router_id 51    # VRRP route ID; must be unique for each instance

    priority 100            # priority; the backup server is set to 90

    advert_int 1            # VRRP heartbeat advertisement interval, 1 second by default

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.18.100/24

    }

    track_script {

        check_nginx

    }

}

# The BACKUP configuration on lb02 is as follows (a sed shortcut for deriving it from the lb01 file is sketched after the block):

! Configuration File for keepalived

global_defs {

   # notification email recipients

   notification_email {

     acassen@firewall.loc

     failover@firewall.loc

     sysadmin@firewall.loc

   }

   # email sender

   notification_email_from Alexandre.cassen@firewall.loc

   smtp_server 127.0.0.1

   smtp_connect_timeout 30

   router_id NGINX_MASTER

}

vrrp_script check_nginx {

    script "/etc/nginx/check_nginx.sh"

}

vrrp_instance VI_1 {

    state BACKUP

    interface ens33

    virtual_router_id 51    # VRRP route ID; must be unique for each instance

    priority 90             # priority; set to 90 on the backup server

    advert_int 1            # VRRP heartbeat advertisement interval, 1 second by default

    authentication {

        auth_type PASS

        auth_pass 1111

    }

    virtual_ipaddress {

        192.168.18.100/24

    }

    track_script {

        check_nginx

    }

}
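Since the lb02 file differs from the lb01 file only in the state and priority values, one way to produce it (a sketch, assuming you first copy the MASTER version over to lb02) is:

[root@lb2 ~]# sed -i 's/state MASTER/state BACKUP/; s/priority 100/priority 90/' /etc/keepalived/keepalived.conf

[root@lb2 ~]# grep -E 'state|priority' /etc/keepalived/keepalived.conf    # confirm BACKUP and 90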

3. Write the nginx monitoring script

[root@lb1 ~]# vim /etc/nginx/check_nginx.sh

count=$(ps -ef | grep nginx | egrep -cv "grep|$$")    # count running nginx processes, excluding grep and this script itself

if [ $count -eq 0 ]; then

    systemctl stop keepalived    # nginx is down, so stop keepalived and let the VIP fail over

fi

4. Grant execute permission and start the service

[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh

[root@lb1 ~]# systemctl start keepalived
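A quick way to confirm that keepalived has claimed the VIP, before reading through the full ip a output below, is to grep for the floating address (192.168.18.100 in this environment):

[root@lb1 ~]# ip addr show ens33 | grep 192.168.18.100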

5. Check the address information

lb01 address information:

[root@lb1 ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff

    inet 192.168.18.150/24 brd 192.168.35.255 scope global ens33

       valid_lft forever preferred_lft forever

    inet 192.168.18.100/24 scope global secondary ens33    # the floating (VIP) address is on lb01

       valid_lft forever preferred_lft forever

    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

       valid_lft forever preferred_lft forever

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

lb02 address information:

[root@lb2 ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:1d:ec:b0 brd ff:ff:ff:ff:ff:ff

    inet 192.168.18.151/24 brd 192.168.35.255 scope global ens33

       valid_lft forever preferred_lft forever

    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

       valid_lft forever preferred_lft forever

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

6. Test failover switching

Bring lb01 down and confirm that the floating address moves:

[root@lb1 ~]# pkill nginx

[root@lb1 ~]# systemctl status nginx

nginx.service - nginx - high performance web server

   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)

   Active: failed (Result: exit-code) since Sat 2020-02-08 16:54:45 CST; 11s ago

     Docs: http://nginx.org/en/docs/

  Process: 13156 ExecStop=/bin/kill -s TERM $MAINPID (code=exited, status=1/FAILURE)

 Main PID: 6930 (code=exited, status=0/SUCCESS)

[root@localhost ~]# systemctl status keepalived.service    # keepalived has also stopped, which shows that the nginx check_nginx.sh script took effect

keepalived.service - LVS and VRRP High Availability Monitor

   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)

   Active: inactive (dead)

Check the lb01 addresses:

[root@lb1 ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff

    inet 192.168.18.150/24 brd 192.168.35.255 scope global ens33

       valid_lft forever preferred_lft forever

    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

       valid_lft forever preferred_lft forever

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

Check the lb02 addresses:

[root@lb2 ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:1d:ec:b0 brd ff:ff:ff:ff:ff:ff

    inet 192.168.18.151/24 brd 192.168.35.255 scope global ens33

       valid_lft forever preferred_lft forever

    inet 192.168.18.100/24 scope global secondary ens33    # the floating address has moved to lb02

       valid_lft forever preferred_lft forever

    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

       valid_lft forever preferred_lft forever

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

Recovery: start the nginx and keepalived services again on lb01

[root@localhost ~]# systemctl start nginx

[root@localhost ~]# systemctl start keepalived.service

[root@localhost ~]# ip a

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1

    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00

    inet 127.0.0.1/8 scope host lo

       valid_lft forever preferred_lft forever

    inet6 ::1/128 scope host

       valid_lft forever preferred_lft forever

2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000

    link/ether 00:0c:29:ba:e6:18 brd ff:ff:ff:ff:ff:ff

    inet 192.168.35.104/24 brd 192.168.35.255 scope global ens33

       valid_lft forever preferred_lft forever

    inet 192.168.35.200/24 scope global secondary ens33    # the floating address has moved back to lb01

       valid_lft forever preferred_lft forever

    inet6 fe80::6ec5:6d7:1b18:466e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::2a3:b621:ca01:463e/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

    inet6 fe80::d4e2:ef9e:6820:145a/64 scope link tentative dadfailed

       valid_lft forever preferred_lft forever

3: virbr0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0

       valid_lft forever preferred_lft forever

4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast master virbr0 state DOWN qlen 1000

    link/ether 52:54:00:14:39:99 brd ff:ff:ff:ff:ff:ff

Since the floating address is back on lb01, the nginx home page you see when visiting the floating address should contain "master".

Pointing the nodes at the VIP address

1. Modify the node configuration files to use the unified VIP

[root@localhost ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig

[root@localhost ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig

[root@localhost ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig

# change all of them to the VIP address

server: https://192.168.18.100:6443
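If all three files still point at the same old apiserver address, the change can also be made in one pass; this sketch assumes they previously pointed at master1 (192.168.18.128) and that you keep a backup copy of the files first:

[root@localhost ~]# sed -i 's#https://192.168.18.128:6443#https://192.168.18.100:6443#' /opt/kubernetes/cfg/{bootstrap,kubelet,kube-proxy}.kubeconfig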

2. After the change, run a quick self-check, then restart the services

[root@node1 ~]# cd /opt/kubernetes/cfg/

[root@node1 cfg]# grep 100 *

bootstrap.kubeconfig:server: https://192.168.18.100:6443

kubelet.kubeconfig:server: https://192.168.18.100:6443

kube-proxy.kubeconfig:server: https://192.168.18.100:6443

[root@node1 cfg]# systemctl restart kubelet.service

[root@node1 cfg]# systemctl restart kube-proxy.service

3. Check the nginx k8s access log on lb01

[root@lb1 ~]# tail /var/log/nginx/k8s-access.log

192.168.18.130 192.168.18.128:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119

192.168.18.130 192.168.18.140:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119

192.168.18.129 192.168.18.128:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120

192.168.18.129 192.168.18.140:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120

4. Operations on master1

# Test creating a pod

[root@master1 ~]# kubectl run nginx --image=nginx

kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.

deployment.apps/nginx created

# Check the status

[root@master1 ~]# kubectl get pods

NAME                    READY   STATUS              RESTARTS   AGE

nginx-dbddb74b8-7hdfj   0/1     ContainerCreating   0          32s

# The ContainerCreating status means the pod is still being created

[root@master1 ~]# kubectl get pods

NAME                    READY   STATUS    RESTARTS   AGE

nginx-dbddb74b8-7hdfj   1/1     Running   0          73s

# The status is now Running, which means the pod has been created and is running

# Note: viewing the pod logs

[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj

Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy)

# The logs cannot be viewed yet; the permission has to be granted first

# Bind the cluster's anonymous user to the administrator role

[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous

clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created

[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj    # no error is reported this time

# View the pod network

[root@master1 ~]# kubectl get pods -o wide

NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE

nginx-dbddb74b8-7hdfj   1/1     Running   0          20m   172.17.32.2   192.168.18.129   <none>

5. On node1, which is on the matching network segment, the pod can be accessed directly

[root@node1 ~]# curl 172.17.32.2

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

<style>

    body {

        width: 35em;

        margin: 0 auto;

        font-family: Tahoma, Verdana, Arial, sans-serif;

    }

</style>

</head>

<body>

<h1>Welcome to nginx!</h1>

<p>If you see this page, the nginx web server is successfully installed and

working. Further configuration is required.</p>

<p>For online documentation and support please refer to

<a href="http://nginx.org/">nginx.org</a>.<br/>

Commercial support is available at

<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>

</body>

</html>

# What you see here is the nginx page served from inside the container

Accessing it generates a log entry, so you can go back to master1 and view the pod's logs

[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj

172.17.32.1 - - [07/Feb/2020:06:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

# Here you can see the record of node1's access coming in through the gateway (172.17.32.1)
