Deploying a Kubernetes Cluster on CentOS 7


Summary:

  • I found the official Kubernetes documentation confusing and sparse, so this setup is pieced together from many blog posts.
  • Since Docker is already in use, an orchestrator is unavoidable; the main options are Kubernetes and Swarm. Kubernetes is more complex to operate, but its feature set is more complete.
  • This post covers installation and deployment only; problems that surface in actual use will be covered in later updates.

Prerequisites

The systems run CentOS 7. Disable the firewall with systemctl stop firewalld.service, and disable SELinux by setting SELINUX=disabled in /etc/selinux/config.
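
For reference, the same steps as commands (a minimal sketch; the sed pattern assumes the stock SELINUX=enforcing entry in /etc/selinux/config):

setenforce 0      # switch to permissive mode for the current boot
sed -i 's/^SELINUX=enforcing$/SELINUX=disabled/' /etc/selinux/config   # persist across reboots
systemctl stop firewalld.service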

Key components

A Kubernetes cluster has two main kinds of nodes: master nodes and minion nodes.
Minion nodes are where the Docker containers actually run; they interact with the Docker daemon on the node and provide proxying.
Master nodes expose the API for managing the cluster and carry out cluster operations by talking to the minion nodes.

apiserver: the entry point through which users interact with the Kubernetes cluster. It wraps create/read/update/delete operations on the core objects, exposes them through a RESTful API, and uses etcd for persistence and object consistency.

scheduler: responsible for scheduling and managing cluster resources. For example, when a pod exits abnormally and must be placed on a new machine, the scheduler uses its scheduling algorithm to find the most suitable node.

controller-manager: chiefly ensures that the replica count declared by a replicationController matches the number of pods actually running, and keeps the service-to-pod mapping up to date.

kubelet: runs on minion nodes and interacts with the node's Docker daemon, e.g. starting and stopping containers and monitoring their state.

proxy: runs on minion nodes and provides the proxy layer for pods. It periodically fetches service information from etcd and rewrites iptables rules accordingly, forwarding traffic to the node hosting the target pod (the earliest versions forwarded traffic in a userspace program, which was less efficient). A way to inspect these rules is sketched after this component overview.

etcd: a key-value store that holds the state of the Kubernetes cluster.

flannel: Flannel is an overlay-network tool designed by the CoreOS team for Kubernetes; it must be downloaded and deployed separately. When Docker starts, it brings up an IP address used to talk to its containers; left unmanaged, that address may be identical on every machine and only allows communication on the local host, so containers on other machines cannot be reached. Flannel re-plans IP allocation for every node in the cluster so that containers on different nodes receive non-overlapping addresses within one internal network and can talk to each other directly over those internal IPs.
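
As a concrete illustration of the proxy mechanism, once the cluster is running you can dump the NAT rules kube-proxy maintains. This is only an inspection aid; the exact chain names (KUBE-SERVICES, KUBE-PORTALS-CONTAINER, etc.) depend on the proxy mode and version, but all carry a KUBE prefix:

# Show only the chains and rules kube-proxy manages in the NAT table
iptables-save -t nat | grep KUBE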

1. Environment and preparation

1.1 Host operating system

  The physical machines run 64-bit CentOS 7; details below.

[root@master ~]# uname -a
Linux master 4.19.12-1.el7.elrepo.x86_64 #1 SMP Fri Dec 21 11:06:36 EST 2018 x86_64 x86_64 x86_64 GNU/Linux
[root@master ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)


1.2 Host information

  Three machines are used for the k8s environment:

Node and roles           Hostname   IP
Master, etcd, registry   master     192.168.200.202
Node1                    node1      192.168.200.203
Node2                    node2      192.168.200.204

Set the hostname on each of the three machines:

  On master:

[root@master ~]# hostnamectl --static set-hostname master

On node1:

[root@node1 ~]# hostnamectl --static set-hostname node1


On node2:

[root@node2 ~]# hostnamectl --static set-hostname node2

Add the host entries on all three machines by running:

echo '192.168.200.202 master
192.168.200.202 etcd
192.168.200.202 registry
192.168.200.203 node1
192.168.200.204 node2' >> /etc/hosts
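
A quick sanity check that the new names resolve:

[root@master ~]# getent hosts etcd
192.168.200.202 etcd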

1.3 Disable the firewall on all three machines

systemctl disable firewalld.service 
systemctl stop firewalld.service

2. Deploy etcd

k8s depends on etcd to run, so etcd must be deployed first. This post installs it with yum:

[root@master ~]# yum install etcd -y

The etcd installed by yum keeps its configuration in /etc/etcd/etcd.conf by default. Edit it and change the following entries: ETCD_NAME, ETCD_LISTEN_CLIENT_URLS, and ETCD_ADVERTISE_CLIENT_URLS.

[root@master ~]# vim /etc/etcd/etcd.conf

# [member]
ETCD_NAME=master
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
#ETCD_WAL_DIR=""
#ETCD_SNAPSHOT_COUNT="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
#ETCD_LISTEN_PEER_URLS="http://0.0.0.0:2380"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379,http://0.0.0.0:4001"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
#ETCD_INITIAL_ADVERTISE_PEER_URLS="http://localhost:2380"
# if you use different ETCD_NAME (e.g. test), set ETCD_INITIAL_CLUSTER
# value for this name, i.e. "test=http://..."
#ETCD_INITIAL_CLUSTER="default=http://localhost:2380"
#ETCD_INITIAL_CLUSTER_STATE="new"
#ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="http://etcd:2379,http://etcd:4001"
#ETCD_DISCOVERY=""
#ETCD_DISCOVERY_SRV=""
#ETCD_DISCOVERY_FALLBACK="proxy"
#ETCD_DISCOVERY_PROXY=""

Start the service and verify its state:
[root@master ~]# systemctl start etcd
[root@master ~]#  etcdctl set testdir/testkey0 0
[root@master ~]#  etcdctl get testdir/testkey0
 
[root@master ~]# etcdctl -C http://etcd:4001 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://etcd:2379
cluster is healthy

[root@master ~]# etcdctl -C http://etcd:2379 cluster-health

member 8e9e05c52164694d is healthy: got healthy result from http://0.0.0.0:2379
cluster is healthy


Further reading: for a multi-node etcd cluster deployment, see http://www.cnblogs.com/zhenyuyaodidiao/p/6237019.html

3. Deploy the master

3.1 Install Docker

[root@master ~]# yum install docker -y

Edit the Docker configuration file so that images can be pulled from the registry, adding the OPTIONS='--insecure-registry registry:5000' line at the bottom.

[root@master ~]# vim /etc/sysconfig/docker

# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon
# runs
OPTIONS='--selinux-enabled --log-driver=journald --signature-verification=false'
if [ -z "${DOCKER_CERT_PATH}" ]; then
    DOCKER_CERT_PATH=/etc/docker
fi
OPTIONS='--insecure-registry registry:5000'

Enable the service at boot and start it:

[root@master ~]# chkconfig docker on 

[root@master ~]# service docker start
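
To confirm the daemon is up before moving on:

[root@master ~]# docker version
[root@master ~]# systemctl status docker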

3.2 Install Kubernetes

[root@master ~]# yum install kubernetes -y

3.3 Configure and start Kubernetes

  The following components must run on the Kubernetes master:
    Kubernetes API Server
    Kubernetes Controller Manager
    Kubernetes Scheduler
Change the following entries in these configuration files accordingly:

3.3.1 /etc/kubernetes/apiserver

[root@master ~]# vim /etc/kubernetes/apiserver
###
# kubernetes system config
#
# The following values are used to configure the kube-apiserver
#
# The address on the local server to listen to.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"
# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"
# Port minions listen on
# KUBELET_PORT="--kubelet-port=10250"
# Comma separated list of nodes in the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://etcd:2379"
# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# default admission control policies
#KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""

3.3.2  /etc/kubernetes/config

 
[root@master ~]# vim /etc/kubernetes/config
###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://master:8080"

Start the services and enable them at boot:

[root@master ~]# systemctl enable kube-apiserver.service
[root@master ~]# systemctl start kube-apiserver.service
[root@master ~]# systemctl enable kube-controller-manager.service
[root@master ~]# systemctl start kube-controller-manager.service
[root@master ~]# systemctl enable kube-scheduler.service
[root@master ~]# systemctl start kube-scheduler.service
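
Assuming all three services came up cleanly, the apiserver can report their health; the componentstatuses resource in the Kubernetes version these yum packages ship should show the scheduler, controller-manager, and etcd as Healthy:

[root@master ~]# kubectl -s http://master:8080 get componentstatuses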

4. Deploy the nodes

4.1 Install Docker

  See section 3.1.

4.2 Install Kubernetes

  See section 3.2.

4.3 Configure and start Kubernetes

  The following components must run on each Kubernetes node:

    Kubelet

    Kubernetes Proxy

Change the following entries in these configuration files accordingly:

4.3.1 /etc/kubernetes/config

[root@node1 ~]# vim /etc/kubernetes/config 

###
# kubernetes system config
#
# The following values are used to configure various aspects of all
# kubernetes services, including
#
# kube-apiserver.service
# kube-controller-manager.service
# kube-scheduler.service
# kubelet.service
# kube-proxy.service
# logging to stderr means we get it in the systemd journal
KUBE_LOGTOSTDERR="--logtostderr=true"

# journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"

# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"

# How the controller-manager, scheduler, and proxy find the apiserver
KUBE_MASTER="--master=http://master:8080"

4.3.2 /etc/kubernetes/kubelet

[root@node1 ~]# vim /etc/kubernetes/kubelet 

###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for
# all interfaces)
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
# KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname-override=node1"

# location of the api-server
KUBELET_API_SERVER="--api-servers=http://master:8080"

# pod infrastructure container
KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=registry.access.redhat.com/rhel7/pod-infrastructure:latest"

# Add your own!
KUBELET_ARGS=""

Start the services and enable them at boot:

[root@node1 ~]# systemctl enable kubelet.service
[root@node1 ~]# systemctl start kubelet.service
[root@node1 ~]# systemctl enable kube-proxy.service
[root@node1 ~]# systemctl start kube-proxy.service

4.4 Check the status

On the master, list the cluster's nodes and their status:

[root@master ~]# kubectl -s http://master:8080 get node
NAME      STATUS    AGE
node1     Ready     3m
node2     Ready     16s

[root@master ~]# kubectl get nodes
NAME      STATUS    AGE
node1     Ready     3m
node2     Ready     43s

At this point a Kubernetes cluster is in place, but it cannot yet work properly; continue with the steps below.

5. Create the overlay network with Flannel

5.1 Install Flannel

Run the following command on the master and on every node to install it:

[root@master ~]# yum install flannel -y

5.2 Configure Flannel

  On the master and every node, edit /etc/sysconfig/flanneld and change the FLANNEL_ETCD_ENDPOINTS entry:

[root@master ~]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://etcd:2379"

# etcd config key. This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/atomic.io/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

5.3 Configure the Flannel key in etcd

  Flannel stores its configuration in etcd, which keeps the settings of all Flannel instances consistent, so the following key must be created in etcd. (The key '/atomic.io/network/config' corresponds to the FLANNEL_ETCD_PREFIX setting in /etc/sysconfig/flanneld above; if they disagree, flanneld will fail to start.)

[root@master ~]# etcdctl mk /atomic.io/network/config '{ "Network": "10.0.0.0/16" }'
{ "Network": "10.0.0.0/16" }
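
Reading the key back confirms what flanneld will see:

[root@master ~]# etcdctl get /atomic.io/network/config
{ "Network": "10.0.0.0/16" }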

5.4 Start everything

  After starting Flannel, Docker and the Kubernetes services must be restarted in turn.

  On the master:

systemctl enable flanneld.service  
systemctl start flanneld.service
service docker restart
systemctl restart kube-apiserver.service
systemctl restart kube-controller-manager.service
systemctl restart kube-scheduler.service

On each node:

systemctl enable flanneld.service 
systemctl start flanneld.service
service docker restart
systemctl restart kubelet.service
systemctl restart kube-proxy.service
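
To verify the overlay network afterwards, check the lease flanneld obtained and the resulting interfaces on each machine (a sketch, assuming the default UDP backend of this flannel package; the interface name and the exact subnets will vary per host):

# flanneld records the lease it obtained from etcd in this file
cat /run/flannel/subnet.env
# flannel0, and docker0 after the restart, should hold addresses
# inside the 10.0.0.0/16 range configured in section 5.3
ip addr show flannel0
ip addr show docker0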
