Preface

Previously, we compared the currently popular microservice frameworks and finally decided to jump into the Kubernetes rabbit hole, so in the posts ahead we will climb out of it bit by bit. Since this is only a first dip, the goal of this article is a single-node deployment that can run the common examples.
Best read alongside the reference documents.
Preparing the server

Ops provisioned an AWS instance (Japan region) running AWS's RedHat-derived custom system, which, for a non-ops person like me, is less comfortable than CentOS. If you deploy on a virtual machine, make sure it has more than one CPU and that swap is disabled.
Base environment

```bash
sudo su
yum update -y
cd
mkdir kubernetes
```
Install Docker

```bash
# Regular installation
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

# Or, to pin a specific older version instead:
yum install --setopt=obsoletes=0 \
  docker-ce-17.03.2.ce-1.el7.centos.x86_64 \
  docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch -y

# Enable on boot && start the service
systemctl enable docker && systemctl start docker
```
Set up the Kubernetes repo

```bash
# Official repo (for servers outside mainland China)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF

# Alternative: Aliyun mirror (for servers inside mainland China)
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```
Install Kubernetes

This is where having a server outside mainland China really pays off.
```bash
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet

# If k8s.gcr.io is unreachable, pull the images from the Aliyun mirror and retag them:
for i in `kubeadm config images list`; do
  imageName=${i#k8s.gcr.io/}
  docker pull registry.aliyuncs.com/google_containers/$imageName
  docker tag registry.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker rmi registry.aliyuncs.com/google_containers/$imageName
done;
```
Initialization

```bash
# Initialize the control plane, pulling the control-plane images from the Aliyun mirror
kubeadm init --image-repository registry.cn-hangzhou.aliyuncs.com/google_containers --kubernetes-version v1.14.1 --pod-network-cidr=10.244.0.0/16
```
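At the end of a successful `kubeadm init`, the tool prints follow-up commands for copying the admin kubeconfig into the current user's home; without that file, kubectl falls back to localhost:8080 and fails to connect. A sketch of those commands, wrapped in a hypothetical `install_kubeconfig` helper so the source path can be overridden:

```shell
# Copy the admin kubeconfig so kubectl can reach the cluster.
# /etc/kubernetes/admin.conf is where kubeadm writes it.
install_kubeconfig() {
  local src="${1:-/etc/kubernetes/admin.conf}"  # overridable for testing
  mkdir -p "$HOME/.kube"
  cp "$src" "$HOME/.kube/config"
  chown "$(id -u):$(id -g)" "$HOME/.kube/config"
}
# Usage (run as the user who will be driving kubectl):
#   install_kubeconfig
```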
Changing the API server port

The next step may fail with:

```bash
The connection to the server localhost:8080 was refused - did you specify the right host or port?

# Edit the API server manifest
vim /etc/kubernetes/manifests/kube-apiserver.yaml
# and change --insecure-port=0 to --insecure-port=8080
```
Disable the swap partition

You need to care about this if you installed the OS yourself in a VM; on cloud instances you can usually skip it.

```bash
# Error reported by kubeadm:
# "[ERROR Swap]: running with swap on is not supported. Please disable swap"

# Turn swap off immediately
swapoff -a
# Then comment out the swap entry in /etc/fstab so it stays off after a reboot
vim /etc/fstab
```
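Instead of editing /etc/fstab by hand, the swap entry can be commented out non-interactively; `comment_swap` below is a hypothetical helper (it keeps a `.bak` backup of the file):

```shell
# Comment out every active swap entry in an fstab-style file so swap
# stays disabled across reboots. A .bak copy of the original is kept.
comment_swap() {
  sed -i.bak '/[[:space:]]swap[[:space:]]/ s/^[^#]/#&/' "$1"
}
# Usage: comment_swap /etc/fstab && swapoff -a
```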
Enable single-node mode

Only for single-machine testing; not needed in a real cluster.

```bash
kubectl taint nodes --all node-role.kubernetes.io/master-
```
Install a network plugin

There are many options here; this time we use Weave.

```bash
# Install Weave Net
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```
Common commands

```bash
kubectl get namespaces             # list namespaces
kubectl create namespace xxx       # create a namespace
kubectl delete namespaces xxx      # delete a namespace
kubectl get pods --all-namespaces  # list pods in all namespaces
kubectl get pods --show-labels     # list pods with their labels
kubectl get nodes                  # list nodes
kubectl get rs                     # list replica sets
```
Broke it? Delete everything and start over (reset k8s to a clean state)
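The commands for this section are missing from this copy; a typical reset sequence is sketched below, wrapped in a hypothetical `reset_k8s` helper whose optional argument lets you dry-run it with `echo`:

```shell
# Tear the node down to a pre-`kubeadm init` state.
# Pass "echo" as $1 to print the commands instead of running them.
reset_k8s() {
  local run="${1:-}"
  $run kubeadm reset -f             # undo kubeadm init/join
  $run rm -rf /etc/cni/net.d        # drop leftover CNI configs
  $run rm -rf "$HOME/.kube/config"  # drop the stale kubeconfig
  $run systemctl restart kubelet
}
# Dry run:
#   reset_k8s echo
```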
Install the Kubernetes Dashboard (official)

```bash
# Apply the config
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.3/aio/deploy/recommended.yaml

# Create an admin ServiceAccount
cat <<EOF > kubernetes/dashboard-adminuser.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
EOF

# Apply the config
kubectl apply -f kubernetes/dashboard-adminuser.yaml

# Get the login token
kubectl -n kubernetes-dashboard describe secret $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')

# Start a local proxy
kubectl proxy
# Then open: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/
```
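With only the ServiceAccount above, the generated token has no permissions and the dashboard will show empty lists. The upstream dashboard docs pair the account with a ClusterRoleBinding to cluster-admin; a sketch (append it to dashboard-adminuser.yaml before applying):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard
```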
Install Kuboard (an alternative dashboard)

Its advantage is that it can be reached over the public internet; the downside is security. All operations in this document were performed through this panel.

```bash
# Apply the config
kubectl apply -f https://kuboard.cn/install-script/kuboard.yaml

# Get the login token
echo $(kubectl -n kube-system get secret $(kubectl -n kube-system get secret | grep kuboard-user | awk '{print $1}') -o go-template='{{.data.token}}' | base64 -d)

# Then open: http://<server-ip>:32567/
```
Test installation: Tomcat

```bash
# Pull the tomcat image (default tag: latest)
docker pull tomcat
cd && mkdir kubernetes

# Create the config file
cat <<EOF > kubernetes/tomcat.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: tomcat-demo
spec:
  replicas: 1
  selector:
    app: tomcat-demo
  template:
    metadata:
      labels:
        app: tomcat-demo
    spec:
      containers:
      - name: tomcat-demo
        image: tomcat
        ports:
        - containerPort: 8080
EOF
```
Notes:

- `replicas: 1` — run one pod instance
- `image: tomcat` — the Docker image to use
- `name: tomcat-demo` — the ReplicationController's name
- `spec.template` — when the number of running pods drops below `replicas`, the RC creates replacement pods from this template
```bash
# Apply the config to create the pod
kubectl create -f kubernetes/tomcat.yaml
# To remove it again:
# kubectl delete rc tomcat-demo

# Write the Service config
cat <<EOF > kubernetes/tomcat-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat-demo
spec:
  type: NodePort
  ports:
  - port: 8080
    nodePort: 30001
  selector:
    app: tomcat-demo
EOF
```
Notes:

- `nodePort: 30001` — maps the container's port 8080 to port 30001 on the host
- `name: tomcat-demo` — the service name
```bash
# Apply the config to create the service
$ kubectl create -f kubernetes/tomcat-svc.yaml
# Output:
service/tomcat-demo created

# Fix the 404 on the Tomcat welcome page (recent images ship an empty webapps/)
docker ps | grep tomcat   # get the container ID
docker exec -it 9160162022f1 /bin/bash
cp -r webapps.dist/* webapps/
exit
# docker restart 9160162022f1   # usually unnecessary; run it only if a refresh still shows nothing
```
Test Nginx

```bash
# This time, we deploy in a different way
# Reference: http://blog.leanote.com/post/gua_l/fdd68bde8685
kubectl create deployment nginx --image=nginx
kubectl create service nodeport nginx --tcp 80:80
kubectl delete service nginx        # delete the service created above
kubectl delete deployment nginx     # delete the deployment created above
```
A simple load-balancing example

```bash
# Install the Go toolchain
yum install -y golang

# Create main.go
cat <<EOF > main.go
package main

import (
	"fmt"
	"math/rand"
	"net/http"
	"time"
)

func randomServerId() string {
	rand.Seed(time.Now().UnixNano())
	str := "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz"
	l := 10
	bytes := make([]byte, l)
	for i := 0; i < l; i++ {
		bytes[i] = str[rand.Intn(len(str))]
	}
	return string(bytes)
}

func main() {
	serverId := randomServerId()
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, fmt.Sprintf("hello world from [%s]", serverId))
	})
	http.ListenAndServe(":8000", nil)
}
EOF

# Build the project
# (plain `go build` works if go is on PATH)
# GOARCH=amd64 GOOS=linux go build -o app main.go
rm -f app
GOARCH=amd64 GOOS=linux /usr/lib/golang/bin/go build -o app main.go

# Write the Dockerfile (kept as simple as possible since this is only a demo)
cat <<EOF > Dockerfile
FROM library/alpine:3.12
RUN apk add bash
RUN apk add libc6-compat
COPY app /app
ENTRYPOINT chmod +x /app && ./app
EXPOSE 8000
EOF

# Build the docker image
docker rmi go-welcome:latest
docker rmi go-welcome:v1
docker build -t go-welcome:v1 -t go-welcome:latest .
```
```bash
# Check the freshly built image
docker images | grep go-welcome

# Smoke-test it locally (make sure the build actually works)
docker run --rm --name test-welcome -p 30003:8000 go-welcome

# Deploy to k8s (remove any earlier attempt first)
kubectl delete service dev-welcome
kubectl delete deployment dev-welcome
# Dry-run to print the Deployment manifest that would be generated
kubectl create deployment dev-welcome --image=go-welcome -o yaml --dry-run
```

Take the manifest printed by the dry-run, add `imagePullPolicy: Never` under the container (so k8s uses the local image instead of trying to pull it), bump `replicas` to 2, and save it:

```bash
cat <<EOF > kubernetes/dev-welcome.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: dev-welcome
  name: dev-welcome
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dev-welcome
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: dev-welcome
    spec:
      containers:
      - image: go-welcome
        name: go-welcome
        resources: {}
        imagePullPolicy: Never
status: {}
EOF

kubectl apply -f kubernetes/dev-welcome.yaml
kubectl create service nodeport dev-welcome --tcp 8000:8000

# Via the Kuboard proxy: http://<ip>:<port>/proxy/http/default/dev-welcome/:/8000/
# Refresh a few times and you will see responses coming from
# different pods (the serverId changes)
```
Kubernetes' default mechanism:

If k8s's crash detection for pods counts as a "health check", then it is the default one: each container runs a main process at startup, and if that process exits with a non-zero code, the container (and hence the pod) is considered failed. Depending on restartPolicy, k8s decides whether to kill the pod and start a replacement.
restartPolicy has three values:

- Always: always restart the container after it terminates (the default).
- OnFailure: restart the container only when it exits abnormally (non-zero exit code).
- Never: never restart the container after it terminates.
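restartPolicy is set per pod and applies to all of its containers. A minimal sketch of a bare Pod that is restarted only on failure (the names and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: restart-demo        # illustrative name
spec:
  restartPolicy: OnFailure  # Always (default) | OnFailure | Never
  containers:
  - name: demo
    image: busybox
    # exits non-zero, so with OnFailure the container gets restarted
    command: ["sh", "-c", "sleep 5; exit 1"]
```

Note that pods managed by a Deployment only accept `Always`; `OnFailure` and `Never` are for bare pods and Jobs.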
To run health checks against a specific endpoint instead, configure an HTTP probe.
HTTP probe example

```bash
cat <<EOF > kubernetes/dev-welcome.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  creationTimestamp: null
  labels:
    app: dev-welcome
  name: dev-welcome
spec:
  replicas: 2
  selector:
    matchLabels:
      app: dev-welcome
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: dev-welcome
    spec:
      containers:
      - image: go-welcome
        name: go-welcome
        resources: {}
        imagePullPolicy: Never
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            httpHeaders:
            - name: Custom-Header
              value: Awesome
          initialDelaySeconds: 3
          periodSeconds: 3
status: {}
EOF
```
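Note that the probe above is the stock example from the Kubernetes documentation: it checks /healthz on port 8080, while the demo app listens on 8000 and serves no /healthz route, so the liveness check would keep failing and the pods would be restarted in a loop. Against the unmodified demo app, a tcpSocket probe on the real port is a working alternative (a sketch of just the probe block):

```yaml
livenessProbe:
  tcpSocket:
    port: 8000          # the port main.go actually listens on
  initialDelaySeconds: 3
  periodSeconds: 3
```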
Basic troubleshooting

```bash
kubectl get pods                 # list existing pods
kubectl describe pod/<podName>   # detailed pod status and events
kubectl logs <podName>           # container logs
```
Reserved for later
Pulling images from inside mainland China

This problem has long had well-known workarounds, but since we have an overseas server, we won't bother with them here.