Abstract: kubeadm 1.13 has reached production readiness, which makes deploying a highly available cluster with kubeadm much simpler. This article deploys an HA Kubernetes cluster on AWS with cloud-provider=aws enabled: an external etcd cluster, an NLB in front of kube-apiserver, the remaining masters and the worker node joined with kubeadm, the Calico network, and subnets tagged so that the ALB ingress controller can automatically discover the subnets it should use.
Preface

kubeadm 1.13 has reached production readiness, and with kubeadm, deploying a highly available cluster is considerably simpler. But since this cluster is deployed on AWS, cloud-provider=aws has to be enabled so that the cluster integrates tightly with the IaaS layer, mainly to use AWS ELB and EBS. There is still relatively little material on this: some of the existing documents are out of date, others are incomplete, and many articles stop at demo level and are nowhere near production-ready, so the deployment took quite a bit of effort.
Component versions and cluster environment

Cluster components and versions:

Kubernetes 1.13.1
Docker 18.06.0-ce
Etcd 3.2.24
Calico 3.4.0 (networking)
Cluster machines

master:
172.31.22.208
172.31.17.44
172.31.22.135
node:
172.31.29.58
PS: the etcd cluster is not containerized; it runs directly on the hosts and is supervised by systemd.

Passwordless SSH login is configured between the three master hosts.
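One way to set this up (a minimal sketch; the key type, path, and use of the root account are assumptions, not taken from the original article) is to generate a key on each master and push it to all three hosts:

# run on each of the three masters (example key path and type)
ssh-keygen -t rsa -b 4096 -N "" -f ~/.ssh/id_rsa
for host in 172.31.22.208 172.31.17.44 172.31.22.135; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub root@${host}
done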
Host setup

Disable the firewall:

systemctl stop firewalld
systemctl disable firewalld

Disable SELinux:

# Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i "s/^SELINUX=enforcing$/SELINUX=permissive/" /etc/selinux/config

Enable net.bridge.bridge-nf-call-ip6tables and net.bridge.bridge-nf-call-iptables:

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
vm.swappiness=0
EOF
sysctl --system

Disable swap:
swapoff -a
Edit the /etc/fstab file and comment out the automatic mount entry for swap.

Use free -m to confirm that swap is now off.
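If you prefer to script the fstab edit, a one-liner sketch (assuming space-separated fstab fields; not part of the original article) is:

# comment out any swap entry so it is not re-mounted after a reboot
sed -i.bak '/ swap / s/^/#/' /etc/fstab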
Load the kernel modules required by IPVS:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

The script above creates /etc/sysconfig/modules/ipvs.modules so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to check whether the required kernel modules have been loaded correctly.

Next, make sure the ipset package is installed on every node (yum install ipset). To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool ipvsadm (yum install ipvsadm).
Grant IAM permissions

Master Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "autoscaling:DescribeAutoScalingGroups",
        "autoscaling:DescribeLaunchConfigurations",
        "autoscaling:DescribeTags",
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:DescribeRouteTables",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeSubnets",
        "ec2:DescribeVolumes",
        "ec2:CreateSecurityGroup",
        "ec2:CreateTags",
        "ec2:CreateVolume",
        "ec2:ModifyInstanceAttribute",
        "ec2:ModifyVolume",
        "ec2:AttachVolume",
        "ec2:AuthorizeSecurityGroupIngress",
        "ec2:CreateRoute",
        "ec2:DeleteRoute",
        "ec2:DeleteSecurityGroup",
        "ec2:DeleteVolume",
        "ec2:DetachVolume",
        "ec2:RevokeSecurityGroupIngress",
        "ec2:DescribeVpcs",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:AttachLoadBalancerToSubnets",
        "elasticloadbalancing:ApplySecurityGroupsToLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancer",
        "elasticloadbalancing:CreateLoadBalancerPolicy",
        "elasticloadbalancing:CreateLoadBalancerListeners",
        "elasticloadbalancing:ConfigureHealthCheck",
        "elasticloadbalancing:DeleteLoadBalancer",
        "elasticloadbalancing:DeleteLoadBalancerListeners",
        "elasticloadbalancing:DescribeLoadBalancers",
        "elasticloadbalancing:DescribeLoadBalancerAttributes",
        "elasticloadbalancing:DetachLoadBalancerFromSubnets",
        "elasticloadbalancing:DeregisterInstancesFromLoadBalancer",
        "elasticloadbalancing:ModifyLoadBalancerAttributes",
        "elasticloadbalancing:RegisterInstancesWithLoadBalancer",
        "elasticloadbalancing:SetLoadBalancerPoliciesForBackendServer",
        "elasticloadbalancing:AddTags",
        "elasticloadbalancing:CreateListener",
        "elasticloadbalancing:CreateTargetGroup",
        "elasticloadbalancing:DeleteListener",
        "elasticloadbalancing:DeleteTargetGroup",
        "elasticloadbalancing:DescribeListeners",
        "elasticloadbalancing:DescribeLoadBalancerPolicies",
        "elasticloadbalancing:DescribeTargetGroups",
        "elasticloadbalancing:DescribeTargetHealth",
        "elasticloadbalancing:ModifyListener",
        "elasticloadbalancing:ModifyTargetGroup",
        "elasticloadbalancing:RegisterTargets",
        "elasticloadbalancing:SetLoadBalancerPoliciesOfListener",
        "iam:CreateServiceLinkedRole",
        "kms:DescribeKey"
      ],
      "Resource": [
        "*"
      ]
    }
  ]
}

Node Policy:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ecr:GetAuthorizationToken",
        "ecr:BatchCheckLayerAvailability",
        "ecr:GetDownloadUrlForLayer",
        "ecr:GetRepositoryPolicy",
        "ecr:DescribeRepositories",
        "ecr:ListImages",
        "ecr:BatchGetImage",
        "sts:AssumeRole"
      ],
      "Resource": "*"
    }
  ]
}

Tags

Apply the following tag to the EC2 instances, route tables, subnets, and security groups:

kubernetes.io/cluster/<cluster-name> = owned

cluster-name naming convention:
k8s-{region}-{env}-{num}
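As an illustration only (the resource IDs below are hypothetical and the cluster name simply follows this convention; neither comes from the original article), the tag could be applied with the AWS CLI like this:

# hypothetical IDs: replace with your instance, route table, subnet, and security group IDs
aws ec2 create-tags \
  --resources i-0123456789abcdef0 rtb-0123456789abcdef0 subnet-0123456789abcdef0 sg-0123456789abcdef0 \
  --tags Key=kubernetes.io/cluster/k8s-usa-west-2-test-1,Value=owned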
For example:

k8s-usa-west-2-test-1

Install Docker and kubeadm

Install a specific Docker version:

yum install docker-18.06.1ce-5.amzn2 -y
systemctl enable docker

Change the Docker Root Dir

Move /var/lib/docker to /data/docker, and make sure /data is a separately mounted data disk.
Edit the /etc/sysconfig/docker file:

OPTIONS="--default-ulimit nofile=1024:4096 -g /data/docker"

Edit /etc/docker/daemon.json:

cat > /etc/docker/daemon.json <<EOF
...
EOF
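The exact daemon.json used here is not shown; as an assumption, a minimal file consistent with the verification output below (overlay2 storage driver, root directory under /data/docker) might look like the sketch that follows. Note that the root directory should be set either here or via the -g flag in OPTIONS, not both, because Docker refuses to start when the same setting appears in both places.

# assumed contents only -- if data-root is set here, remove "-g /data/docker" from OPTIONS
cat > /etc/docker/daemon.json <<EOF
{
  "data-root": "/data/docker",
  "storage-driver": "overlay2"
}
EOF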
Verify:

[root@ip-172-31-22-208 ~]# ls -lrt /var/lib/docker
total 0
[root@ip-172-31-22-208 ~]# ls -lrt /data/docker/
total 0
drwx------ 3 root root 20 Dec 11 10:44 containerd
drwx------ 2 root root  6 Dec 11 10:44 tmp
drwx------ 2 root root  6 Dec 11 10:44 runtimes
drwx------ 4 root root 32 Dec 11 10:44 plugins
drwx------ 2 root root  6 Dec 11 10:44 containers
drwx------ 2 root root 25 Dec 11 10:44 volumes
drwx------ 3 root root 22 Dec 11 10:44 image
drwx------ 2 root root  6 Dec 11 10:44 trust
drwxr-x--- 3 root root 19 Dec 11 10:44 network
drwx------ 3 root root 40 Dec 11 10:44 overlay2
drwx------ 2 root root  6 Dec 11 10:44 swarm
drwx------ 2 root root 24 Dec 11 10:44 builder
drwx------ 4 root root 92 Dec 11 10:44 buildkit

Restart the Docker service:
systemctl start docker

Verify Docker:

[root@ip-172-31-22-208 ~]# docker info
Containers: 0
 Running: 0
 Paused: 0
 Stopped: 0
Images: 0
Server Version: 18.06.1-ce
Storage Driver: overlay2
 Backing Filesystem: xfs
 Supports d_type: true
 Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
 Volume: local
 Network: bridge host macvlan null overlay
 Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 468a545b9edcd5932818eb9de8e72413e616e86e
runc version: 69663f0bd4b60df09991c08812a60108003fa340
init version: fec3683
Security Options:
 seccomp
  Profile: default
Kernel Version: 4.14.70-72.55.amzn2.x86_64
Operating System: Amazon Linux 2
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 15.67GiB
Name: ip-172-31-22-208.us-west-2.compute.internal
ID: CG7S:P5XD:FLU6:MULI:2TSI:OLRY:A6EX:SM3D:FXNB:CMEQ:MU6R:XSCW
Docker Root Dir: /data/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
 127.0.0.0/8
Live Restore Enabled: false

Install kubeadm and related packages

Add the Kubernetes repo:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=0
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF

Install kubeadm, kubelet, and kubectl:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet

Verify the kubeadm version:

[root@ip-172-31-22-208 ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"13", GitVersion:"v1.13.0", GitCommit:"ddf47ac13c1a9483ea035a79cd7c10005ff21a6d", GitTreeState:"clean", BuildDate:"2018-12-03T21:02:01Z", GoVersion:"go1.11.2", Compiler:"gc", Platform:"linux/amd64"}

Update the kubelet config

To reserve resources on each node and to support cloud-provider, the kubelet configuration needs some changes. First add KUBELET_EXTRA_ARGS to /etc/sysconfig/kubelet:
KUBELET_EXTRA_ARGS=--cloud-provider=aws

Reserve resources

Create the cgroups:

mkdir -p /sys/fs/cgroup/cpu/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/cpuacct/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/cpuset/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/memory/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/devices/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/blkio/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/hugetlb/system.slice/kubelet.service
mkdir -p /sys/fs/cgroup/systemd/system.slice/kubelet.service

Add the following to /var/lib/kubelet/config.yaml:

enforceNodeAllocatable:
- pods
- kube-reserved
- system-reserved
kubeReservedCgroup: /system.slice/kubelet.service
systemReservedCgroup: /system.slice
systemReserved:
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 5Gi
kubeReserved:
  cpu: 500m
  memory: 1Gi
  ephemeral-storage: 5Gi

Deploy a highly available etcd cluster

Kubernetes stores all of its data in etcd. This section describes how to deploy a three-node highly available etcd cluster. The three nodes are co-located on the Kubernetes master machines and are named etcd-host0, etcd-host1, and etcd-host2:
infra0: 172.31.22.208
infra1: 172.31.17.44
infra2: 172.31.22.135
Variables used

The variables used in this document are defined as follows:

# infra0 (172.31.22.208)
export NODE_NAME=infra0   # name of this machine (any value, as long as the machines can be told apart)
export NODE_IP=172.31.22.208   # IP of the machine being deployed
export NODE_IPS="172.31.22.208 172.31.17.44 172.31.22.135"   # IPs of all etcd cluster machines
# IPs and ports used for communication within the etcd cluster
export ETCD_NODES=infra0=https://172.31.22.208:2380,infra1=https://172.31.17.44:2380,infra2=https://172.31.22.135:2380

# infra1 (172.31.17.44) -- same variables, adjusted for this host
export NODE_NAME=infra1
export NODE_IP=172.31.17.44
export NODE_IPS="172.31.22.208 172.31.17.44 172.31.22.135"
export ETCD_NODES=infra0=https://172.31.22.208:2380,infra1=https://172.31.17.44:2380,infra2=https://172.31.22.135:2380

# infra2 (172.31.22.135) -- same variables, adjusted for this host
export NODE_NAME=infra2
export NODE_IP=172.31.22.135
export NODE_IPS="172.31.22.208 172.31.17.44 172.31.22.135"
export ETCD_NODES=infra0=https://172.31.22.208:2380,infra1=https://172.31.17.44:2380,infra2=https://172.31.22.135:2380

Download the binaries

Download the latest release binaries from the https://github.com/coreos/etcd/releases page:

wget https://github.com/coreos/etcd/releases/download/v3.2.24/etcd-v3.2.24-linux-amd64.tar.gz
tar -xvf etcd-v3.2.24-linux-amd64.tar.gz
mv etcd-v3.2.24-linux-amd64/etcd* /usr/bin

Create keys and certificates with kubeadm

Create kubeadm configuration files

Use the following script to generate a kubeadm configuration file for every host that will run an etcd member.

# Update HOST0, HOST1, and HOST2 with the IPs or resolvable names of your hosts
export HOST0=172.31.22.208
export HOST1=172.31.17.44
export HOST2=172.31.22.135

# Create temp directories to store files that will end up on other hosts.
mkdir -p /tmp/${HOST0}/ /tmp/${HOST1}/ /tmp/${HOST2}/

ETCDHOSTS=(${HOST0} ${HOST1} ${HOST2})
NAMES=("infra0" "infra1" "infra2")

for i in "${!ETCDHOSTS[@]}"; do
HOST=${ETCDHOSTS[$i]}
NAME=${NAMES[$i]}
cat << EOF > /tmp/${HOST}/kubeadmcfg.yaml
apiVersion: "kubeadm.k8s.io/v1beta1"
kind: ClusterConfiguration
etcd:
    local:
        serverCertSANs:
        - "${HOST}"
        peerCertSANs:
        - "${HOST}"
        extraArgs:
            initial-cluster: infra0=https://${ETCDHOSTS[0]}:2380,infra1=https://${ETCDHOSTS[1]}:2380,infra2=https://${ETCDHOSTS[2]}:2380
            initial-cluster-state: new
            name: ${NAME}
            listen-peer-urls: https://${HOST}:2380
            listen-client-urls: https://${HOST}:2379
            advertise-client-urls: https://${HOST}:2379
            initial-advertise-peer-urls: https://${HOST}:2380
EOF
done

Generate the certificate authority

Run the following command:

kubeadm init phase certs etcd-ca

This generates the following two files:
/etc/kubernetes/pki/etcd/ca.crt
/etc/kubernetes/pki/etcd/ca.key
Create certificates for each member

kubeadm init phase certs etcd-server --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST2}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST2}/
# cleanup non-reusable certificates
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST1}/kubeadmcfg.yaml
cp -R /etc/kubernetes/pki /tmp/${HOST1}/
find /etc/kubernetes/pki -not -name ca.crt -not -name ca.key -type f -delete

kubeadm init phase certs etcd-server --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-peer --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs etcd-healthcheck-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
kubeadm init phase certs apiserver-etcd-client --config=/tmp/${HOST0}/kubeadmcfg.yaml
# No need to move the certs because they are for HOST0

# clean up certs that should not be copied off this host
find /tmp/${HOST2} -name ca.key -type f -delete
find /tmp/${HOST1} -name ca.key -type f -delete

Copy the certificates to the corresponding hosts

USER=root
CONTROL_PLANE_IPS="172.31.17.44 172.31.22.135"
for host in ${CONTROL_PLANE_IPS}; do
    scp -r /tmp/${host}/pki "${USER}"@$host:
done

For example, the complete list of files required on HOST0 is:

/etc/kubernetes/pki
├── apiserver-etcd-client.crt
├── apiserver-etcd-client.key
└── etcd
    ├── ca.crt
    ├── ca.key
    ├── healthcheck-client.crt
    ├── healthcheck-client.key
    ├── peer.crt
    ├── peer.key
    ├── server.crt
    └── server.key

The other two hosts are laid out the same way.
Create the etcd systemd unit file

mkdir -p /var/lib/etcd   # the working directory must be created first
cat > etcd.service <<EOF
...
EOF

Notes (a sketch of the unit file follows these notes):

The etcd working directory and data directory are set to /var/lib/etcd, and this directory must be created before the service is started;

To secure communication, specify etcd's certificate and private key (cert-file and key-file), the certificate, key, and CA used for peer communication (peer-cert-file, peer-key-file, peer-trusted-ca-file), and the CA certificate used to verify clients (trusted-ca-file);

When --initial-cluster-state is new, the value passed to --name must appear in the --initial-cluster list;
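The unit file content is not reproduced above. The following is only a sketch of what it might look like, assuming the kubeadm-generated certificates under /etc/kubernetes/pki/etcd and the NODE_NAME, NODE_IP, and ETCD_NODES variables defined earlier (the unquoted heredoc expands them when the file is written):

cat > etcd.service <<EOF
[Unit]
Description=Etcd Server
Documentation=https://github.com/coreos/etcd
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
# sketch only: flags chosen to match the notes above (server/peer certs, CA, initial cluster)
ExecStart=/usr/bin/etcd \\
  --name=${NODE_NAME} \\
  --cert-file=/etc/kubernetes/pki/etcd/server.crt \\
  --key-file=/etc/kubernetes/pki/etcd/server.key \\
  --trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
  --peer-cert-file=/etc/kubernetes/pki/etcd/peer.crt \\
  --peer-key-file=/etc/kubernetes/pki/etcd/peer.key \\
  --peer-trusted-ca-file=/etc/kubernetes/pki/etcd/ca.crt \\
  --initial-advertise-peer-urls=https://${NODE_IP}:2380 \\
  --listen-peer-urls=https://${NODE_IP}:2380 \\
  --listen-client-urls=https://${NODE_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls=https://${NODE_IP}:2379 \\
  --initial-cluster=${ETCD_NODES} \\
  --initial-cluster-state=new \\
  --initial-cluster-token=etcd-cluster-0 \\
  --data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF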
Start the etcd service

mv etcd.service /etc/systemd/system/
systemctl daemon-reload
systemctl enable etcd
systemctl start etcd
systemctl status etcd

The first etcd process to start will appear to hang for a while; it is waiting for the etcd processes on the other nodes to join the cluster, which is normal.

Repeat the steps above on every etcd node until the etcd service is running on all machines.

Verify the service

After the etcd cluster has been deployed, run the following command on any etcd node:
for ip in ${NODE_IPS}; do
    ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://${ip}:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/server.crt --key=/etc/kubernetes/pki/etcd/server.key endpoint health
done

Expected result:

https://172.31.22.208:2379 is healthy: successfully committed proposal: took = 1.543275ms
https://172.31.17.44:2379 is healthy: successfully committed proposal: took = 1.883033ms
https://172.31.22.135:2379 is healthy: successfully committed proposal: took = 2.026367ms

The cluster is working correctly when all three etcd endpoints report healthy (warnings can be ignored).
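As an additional, optional check (not part of the original steps), the member list can be inspected from any node using the same certificates:

ETCDCTL_API=3 /usr/bin/etcdctl --endpoints=https://172.31.22.208:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list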
Deploy a highly available master cluster

Create a TCP load balancer for kube-apiserver

An AWS NLB is used here; the creation steps are not described in detail.
The resulting NLB is nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com.

Add it to the variables:

export LOAD_BALANCER_DNS=nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com
export ETCD_0_IP=172.31.22.208
export ETCD_1_IP=172.31.17.44
export ETCD_2_IP=172.31.22.135

Create the kubeadm config with the aws cloud-provider enabled:

cat > kubeadm-config.yaml <<EOF
...
EOF

Create the kubeadm config without the aws cloud-provider:

cat > kubeadm-config.yaml <<EOF
...
EOF
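The contents of kubeadm-config.yaml are not shown above. The following is a sketch of what the cloud-provider-enabled variant might look like for the kubeadm v1beta1 API, assuming the external etcd cluster, the certificate paths from the previous sections, and the pod CIDR used by Calico later; the variant without the cloud provider would simply omit the cloud-provider extraArgs:

cat > kubeadm-config.yaml <<EOF
# sketch only: adjust kubernetesVersion, clusterName, SANs, and the pod CIDR to your environment;
# clusterName should match the kubernetes.io/cluster/<cluster-name> tag applied earlier
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
clusterName: k8s-usa-west-2-test-1
controlPlaneEndpoint: "${LOAD_BALANCER_DNS}:6443"
apiServer:
  certSANs:
  - "${LOAD_BALANCER_DNS}"
  extraArgs:
    cloud-provider: aws
controllerManager:
  extraArgs:
    cloud-provider: aws
networking:
  podSubnet: "192.168.0.0/16"
etcd:
  external:
    endpoints:
    - https://${ETCD_0_IP}:2379
    - https://${ETCD_1_IP}:2379
    - https://${ETCD_2_IP}:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF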
Create the first master

Run:

kubeadm init --config=kubeadm-config.yaml

When it finishes, the output includes the kubeadm join commands used below to add the remaining control-plane nodes and the worker nodes.
Set up access credentials:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config

Create the remaining masters

Copy the certificates:
USER=root # customizable
CONTROL_PLANE_IPS="172.31.17.44 172.31.22.135"
for host in ${CONTROL_PLANE_IPS}; do
    scp /etc/kubernetes/pki/ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/ca.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.key "${USER}"@$host:
    scp /etc/kubernetes/pki/sa.pub "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.crt "${USER}"@$host:
    scp /etc/kubernetes/pki/front-proxy-ca.key "${USER}"@$host:
    scp /etc/kubernetes/admin.conf "${USER}"@$host:
done

On the remaining hosts, run:

USER=root # customizable
mv /${USER}/ca.crt /etc/kubernetes/pki/
mv /${USER}/ca.key /etc/kubernetes/pki/
mv /${USER}/sa.pub /etc/kubernetes/pki/
mv /${USER}/sa.key /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.crt /etc/kubernetes/pki/
mv /${USER}/front-proxy-ca.key /etc/kubernetes/pki/
mv /${USER}/admin.conf /etc/kubernetes/admin.conf

Join them as control-plane nodes:

kubeadm join nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com:6443 --token u9hmb3.gwfozvsz90k3yt9g --discovery-token-ca-cert-hash sha256:24c354cce46de9c1eb1a8358b9ba064166e87cf6c011fecaae3350c3910c215a --experimental-control-plane

Forgot the discovery-token-ca-cert-hash? Recompute it with:

openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed "s/^.* //"

Deploy the Calico network

First check whether src/dst checks have been disabled on the AWS EC2 instances.
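If source/destination checking is still enabled, it can be turned off per instance with the AWS CLI; a sketch with a hypothetical instance ID (repeat for every master and node) looks like this:

# hypothetical instance ID; replace with your own
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --no-source-dest-check
# confirm the attribute is now false
aws ec2 describe-instance-attribute --instance-id i-0123456789abcdef0 --attribute sourceDestCheck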
配置calicoctl 下载calicoctlcurl -O -L https://github.com/projectcalico/calicoctl/releases/download/v3.4.0/calicoctl chmod +x calicoctl mv calicoctl /usr/bin/配置calico config 文件cat > /etc/calico/calicoctl.cfg <使用到的变量 export ETCD_KEY=$(cat /etc/kubernetes/pki/etcd/server.key | base64 | tr -d " ") export ETCD_CERT=$(cat /etc/kubernetes/pki/etcd/server.crt | base64 | tr -d " ") export ETCD_CA=$(cat /etc/kubernetes/pki/etcd/ca.crt | base64 | tr -d " ")创建calico.ymlcat > calico.yml <部署calico| base64 -w 0 etcd-key: ${ETCD_KEY} etcd-cert: ${ETCD_CERT} etcd-ca: ${ETCD_CA} --- # This manifest installs the calico/node container, as well # as the Calico CNI plugins and network config on # each master and worker node in a Kubernetes cluster. kind: DaemonSet apiVersion: extensions/v1beta1 metadata: name: calico-node namespace: kube-system labels: k8s-app: calico-node spec: selector: matchLabels: k8s-app: calico-node updateStrategy: type: RollingUpdate rollingUpdate: maxUnavailable: 1 template: metadata: labels: k8s-app: calico-node annotations: # This, along with the CriticalAddonsOnly toleration below, # marks the pod as a critical add-on, ensuring it gets # priority scheduling and that its resources are reserved # if it ever gets evicted. scheduler.alpha.kubernetes.io/critical-pod: "" spec: nodeSelector: beta.kubernetes.io/os: linux hostNetwork: true tolerations: # Make sure calico-node gets scheduled on all nodes. - effect: NoSchedule operator: Exists # Mark the pod as a critical add-on for rescheduling. - key: CriticalAddonsOnly operator: Exists - effect: NoExecute operator: Exists serviceAccountName: calico-node # Minimize downtime during a rolling upgrade or deletion; tell Kubernetes to do a "force # deletion": https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods. terminationGracePeriodSeconds: 0 initContainers: # This container installs the Calico CNI binaries # and CNI network config file on each node. - name: install-cni image: quay.io/calico/cni:v3.4.0 command: ["/install-cni.sh"] env: # Name of the CNI config file to create. - name: CNI_CONF_NAME value: "10-calico.conflist" # The CNI network config to install on each node. - name: CNI_NETWORK_CONFIG valueFrom: configMapKeyRef: name: calico-config key: cni_network_config # The location of the Calico etcd cluster. - name: ETCD_ENDPOINTS valueFrom: configMapKeyRef: name: calico-config key: etcd_endpoints # CNI MTU Config variable - name: CNI_MTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # Prevents the container from sleeping forever. - name: SLEEP value: "false" volumeMounts: - mountPath: /host/opt/cni/bin name: cni-bin-dir - mountPath: /host/etc/cni/net.d name: cni-net-dir - mountPath: /calico-secrets name: etcd-certs containers: # Runs calico/node container on each Kubernetes node. This # container programs network policy and routes on each # host. - name: calico-node image: quay.io/calico/node:v3.4.0 env: # The location of the Calico etcd cluster. - name: ETCD_ENDPOINTS valueFrom: configMapKeyRef: name: calico-config key: etcd_endpoints # Location of the CA certificate for etcd. - name: ETCD_CA_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_ca # Location of the client key for etcd. - name: ETCD_KEY_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_key # Location of the client certificate for etcd. - name: ETCD_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_cert # Set noderef for node controller. 
- name: CALICO_K8S_NODE_REF valueFrom: fieldRef: fieldPath: spec.nodeName # Choose the backend to use. - name: CALICO_NETWORKING_BACKEND valueFrom: configMapKeyRef: name: calico-config key: calico_backend # Cluster type to identify the deployment type - name: CLUSTER_TYPE value: "k8s,bgp" # Auto-detect the BGP IP address. - name: IP value: "autodetect" # Enable IPIP - name: CALICO_IPV4POOL_IPIP value: "Always" # Set MTU for tunnel device used if ipip is enabled - name: FELIX_IPINIPMTU valueFrom: configMapKeyRef: name: calico-config key: veth_mtu # The default IPv4 pool to create on startup if none exists. Pod IPs will be # chosen from this range. Changing this value after installation will have # no effect. This should fall within --cluster-cidr. - name: CALICO_IPV4POOL_CIDR value: "192.168.0.0/16" # Disable file logging so `kubectl logs` works. - name: CALICO_DISABLE_FILE_LOGGING value: "true" # Set Felix endpoint to host default action to ACCEPT. - name: FELIX_DEFAULTENDPOINTTOHOSTACTION value: "ACCEPT" # Disable IPv6 on Kubernetes. - name: FELIX_IPV6SUPPORT value: "false" # Set Felix logging to "info" - name: FELIX_LOGSEVERITYSCREEN value: "info" - name: FELIX_HEALTHENABLED value: "true" securityContext: privileged: true resources: requests: cpu: 250m livenessProbe: httpGet: path: /liveness port: 9099 host: localhost periodSeconds: 10 initialDelaySeconds: 10 failureThreshold: 6 readinessProbe: exec: command: - /bin/calico-node - -bird-ready - -felix-ready periodSeconds: 10 volumeMounts: - mountPath: /lib/modules name: lib-modules readOnly: true - mountPath: /run/xtables.lock name: xtables-lock readOnly: false - mountPath: /var/run/calico name: var-run-calico readOnly: false - mountPath: /var/lib/calico name: var-lib-calico readOnly: false - mountPath: /calico-secrets name: etcd-certs volumes: # Used by calico/node. - name: lib-modules hostPath: path: /lib/modules - name: var-run-calico hostPath: path: /var/run/calico - name: var-lib-calico hostPath: path: /var/lib/calico - name: xtables-lock hostPath: path: /run/xtables.lock type: FileOrCreate # Used to install CNI. - name: cni-bin-dir hostPath: path: /opt/cni/bin - name: cni-net-dir hostPath: path: /etc/cni/net.d # Mount in the etcd TLS secrets with mode 400. # See https://kubernetes.io/docs/concepts/configuration/secret/ - name: etcd-certs secret: secretName: calico-etcd-secrets defaultMode: 0400 --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-node namespace: kube-system --- # This manifest deploys the Calico Kubernetes controllers. # See https://github.com/projectcalico/kube-controllers apiVersion: extensions/v1beta1 kind: Deployment metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers annotations: scheduler.alpha.kubernetes.io/critical-pod: "" spec: # The controllers can only have a single active instance. replicas: 1 strategy: type: Recreate template: metadata: name: calico-kube-controllers namespace: kube-system labels: k8s-app: calico-kube-controllers spec: nodeSelector: beta.kubernetes.io/os: linux # The controllers must run in the host network namespace so that # it isn"t governed by policy that would prevent it from working. hostNetwork: true tolerations: # Mark the pod as a critical add-on for rescheduling. 
- key: CriticalAddonsOnly operator: Exists - key: node-role.kubernetes.io/master effect: NoSchedule serviceAccountName: calico-kube-controllers containers: - name: calico-kube-controllers image: quay.io/calico/kube-controllers:v3.4.0 env: # The location of the Calico etcd cluster. - name: ETCD_ENDPOINTS valueFrom: configMapKeyRef: name: calico-config key: etcd_endpoints # Location of the CA certificate for etcd. - name: ETCD_CA_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_ca # Location of the client key for etcd. - name: ETCD_KEY_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_key # Location of the client certificate for etcd. - name: ETCD_CERT_FILE valueFrom: configMapKeyRef: name: calico-config key: etcd_cert # Choose which controllers to run. - name: ENABLED_CONTROLLERS value: policy,namespace,serviceaccount,workloadendpoint,node volumeMounts: # Mount in the etcd TLS secrets. - mountPath: /calico-secrets name: etcd-certs readinessProbe: exec: command: - /usr/bin/check-status - -r volumes: # Mount in the etcd TLS secrets with mode 400. # See https://kubernetes.io/docs/concepts/configuration/secret/ - name: etcd-certs secret: secretName: calico-etcd-secrets defaultMode: 0400 --- apiVersion: v1 kind: ServiceAccount metadata: name: calico-kube-controllers namespace: kube-system --- # Include a clusterrole for the kube-controllers component, # and bind it to the calico-kube-controllers serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: calico-kube-controllers rules: # Pods are monitored for changing labels. # The node controller monitors Kubernetes nodes. # Namespace and serviceaccount labels are used for policy. - apiGroups: - "" resources: - pods - nodes - namespaces - serviceaccounts verbs: - watch - list # Watch for changes to Kubernetes NetworkPolicies. - apiGroups: - networking.k8s.io resources: - networkpolicies verbs: - watch - list --- kind: ClusterRoleBinding apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: calico-kube-controllers roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-kube-controllers subjects: - kind: ServiceAccount name: calico-kube-controllers namespace: kube-system --- # Include a clusterrole for the calico-node DaemonSet, # and bind it to the calico-node serviceaccount. kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1beta1 metadata: name: calico-node rules: # The CNI plugin needs to get pods, nodes, and namespaces. - apiGroups: [""] resources: - pods - nodes - namespaces verbs: - get - apiGroups: [""] resources: - endpoints - services verbs: # Used to discover service IPs for advertisement. - watch - list - apiGroups: [""] resources: - nodes/status verbs: # Needed for clearing NodeNetworkUnavailable flag. - patch --- apiVersion: rbac.authorization.k8s.io/v1beta1 kind: ClusterRoleBinding metadata: name: calico-node roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: calico-node subjects: - kind: ServiceAccount name: calico-node namespace: kube-system --- EOF kubectl apply -f calico.yml设置ippool执行:
calicoctl apply -f - << EOF
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  ipipMode: CrossSubnet
  natOutgoing: true
EOF

Deploy the worker node

Perform all of the host-setup steps described above on the node.

Run the join command:

kubeadm join nlb-sgt-k8sapiserver-test-4748f2f556591bb7.elb.us-west-2.amazonaws.com:6443 --token u9hmb3.gwfozvsz90k3yt9g --discovery-token-ca-cert-hash sha256:24c354cce46de9c1eb1a8358b9ba064166e87cf6c011fecaae3350c3910c215a

Verify:

kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
ip-172-31-17-44.us-west-2.compute.internal    Ready    master   4m2s    v1.13.0
ip-172-31-22-135.us-west-2.compute.internal   Ready    master   3m59s   v1.13.0
ip-172-31-22-208.us-west-2.compute.internal   Ready    master   16h     v1.13.0
ip-172-31-29-58.us-west-2.compute.internal    Ready    <none>   14h     v1.13.0

Deploy addons

Deploy the AWS default storage class:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/storage-class/aws/default.yaml

Create the alb-ingress-controller

Tag the subnets

Tag the AWS subnets so that the ingress controller can automatically discover the subnets used for ALBs (an example CLI call follows the tag list below):
kubernetes.io/cluster/${cluster-name} must be set to owned or shared
kubernetes.io/role/internal-elb must be set to 1 or `` for internal LoadBalancers
kubernetes.io/role/elb must be set to 1 or `` for internet-facing LoadBalancers
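As an illustration only (the subnet IDs are hypothetical; the cluster name matches the --cluster-name used in the controller manifest below), the tags could be added with the AWS CLI:

# hypothetical subnet IDs; tag every public subnet the ALBs should use
aws ec2 create-tags \
  --resources subnet-0123456789abcdef0 subnet-0fedcba9876543210 \
  --tags Key=kubernetes.io/cluster/k8s-us-west-test-1,Value=shared \
         Key=kubernetes.io/role/elb,Value=1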
rbackubectl apply -f https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.0.1/docs/examples/rbac-role.yaml按照如下yaml创建# Application Load Balancer (ALB) Ingress Controller Deployment Manifest. # This manifest details sensible defaults for deploying an ALB Ingress Controller. # GitHub: https://github.com/kubernetes-sigs/aws-alb-ingress-controller apiVersion: apps/v1 kind: Deployment metadata: labels: app: alb-ingress-controller name: alb-ingress-controller # Namespace the ALB Ingress Controller should run in. Does not impact which # namespaces it"s able to resolve ingress resource for. For limiting ingress # namespace scope, see --watch-namespace. namespace: kube-system annotations: scheduler.alpha.kubernetes.io/critical-pod: "" spec: replicas: 1 selector: matchLabels: app: alb-ingress-controller strategy: rollingUpdate: maxSurge: 1 maxUnavailable: 1 type: RollingUpdate template: metadata: annotations: iam.amazonaws.com/role: arn:aws:iam::1234567:role/Role-KubernetesIngressController-test labels: app: alb-ingress-controller spec: containers: - args: # Limit the namespace where this ALB Ingress Controller deployment will # resolve ingress resources. If left commented, all namespaces are used. # - --watch-namespace=your-k8s-namespace # Setting the ingress-class flag below ensures that only ingress resources with the # annotation kubernetes.io/ingress.class: "alb" are respected by the controller. You may # choose any class you"d like for this controller to respect. - --ingress-class=alb # Name of your cluster. Used when naming resources created # by the ALB Ingress Controller, providing distinction between # clusters. - --cluster-name=k8s-us-west-test-1 # AWS VPC ID this ingress controller will use to create AWS resources. # If unspecified, it will be discovered from ec2metadata. # - --aws-vpc-id=vpc-xxxxxx # AWS region this ingress controller will operate in. # If unspecified, it will be discovered from ec2metadata. # List of regions: http://docs.aws.amazon.com/general/latest/gr/rande.html#vpc_region # - --aws-region=us-west-1 # Enables logging on all outbound requests sent to the AWS API. # If logging is desired, set to true. # - ---aws-api-debug # Maximum number of times to retry the aws calls. # defaults to 10. # - --aws-max-retries=10 env: # AWS key id for authenticating with the AWS API. # This is only here for examples. It"s recommended you instead use # a project like kube2iam for granting access. #- name: AWS_ACCESS_KEY_ID # value: KEYVALUE # AWS key secret for authenticating with the AWS API. # This is only here for examples. It"s recommended you instead use # a project like kube2iam for granting access. #- name: AWS_SECRET_ACCESS_KEY # value: SECRETVALUE # Repository location of the ALB Ingress Controller. image: 894847497797.dkr.ecr.us-west-2.amazonaws.com/aws-alb-ingress-controller:v1.0.1 imagePullPolicy: Always name: server resources: {} terminationMessagePath: /dev/termination-log dnsPolicy: ClusterFirst restartPolicy: Always securityContext: {} terminationGracePeriodSeconds: 30 serviceAccountName: alb-ingress serviceAccount: alb-ingress注意cluster-name 指定集群name。
Create the dashboard

kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

An admin user with a cluster-admin role binding is needed. Use the yaml below to create the admin user and grant it administrator privileges, then log in to the dashboard with its token. The file is admin-role.yaml:

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: admin
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
roleRef:
  kind: ClusterRole
  name: cluster-admin
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: admin
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin
  namespace: kube-system
  labels:
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile

Get the token:

kubectl -n kube-system get secret | grep admin-token
admin-token-cs4gs    kubernetes.io/service-account-token   3      10m
kubectl describe secret admin-token-cs4gs -n kube-system

Redeploying (resetting a node)

kubeadm reset
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
ipvsadm --clear
ifconfig tunl0 down
ip link delete tunl0

Upgrading kubeadm, kubectl, and kubelet

Upgrade kubeadm:
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
chmod a+rx /usr/bin/kubeadm

Upgrade kubectl:

export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubectl > /usr/bin/kubectl
chmod a+rx /usr/bin/kubectl

Upgrade kubelet:

export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubelet > /usr/bin/kubelet
chmod a+rx /usr/bin/kubelet
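After replacing the kubelet binary it still needs to be restarted and checked; one way (not shown in the original) is:

systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet
kubelet --version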