
Kubernetes Ingress and Ingress Controller

Ingress Controller

In a Kubernetes cluster, the backend resources being proxied do not have HTTPS configured; they speak plain HTTP. Instead we use a dedicated proxy Pod, a user-space application such as Nginx, Traefik, HAProxy, or Envoy. When a user tries to access a service, the request does not go to the Service in front of the backend Pods first; it reaches this proxy Pod. This Pod sits on the same network segment as the Pods behind the Service, so it can talk to the backend Pods directly and do the reverse proxying without going through the Service. The question is how this Pod receives external traffic: it needs a NodePort-type Service.
After the client is scheduled through that NodePort Service, it talks to this Pod, which is the one configured with HTTPS. Once the request arrives, the Pod reverse-proxies it to the backend Pods; in other words, this Pod acts as the HTTPS session offloader.
Hitting the NodePort on any node reaches that reverse-proxy Pod, since every node has an address on the Pod network and can communicate with the Pod directly.
But this model has too many scheduling hops: an external load balancer schedules to a node, the node to the Service, the Service to the reverse-proxy Pod, and the reverse-proxy Pod to the backend Pods. The performance is poor.

Alternatively, the reverse-proxy Pod can share the node's network namespace, that is, listen on the host's own address. Then a layer-4 load balancer outside the cluster, whose scheduling overhead is small, forwards traffic to the node and from there straight into the cluster's Pods.

This does mean that only one such Pod can run per node. In general only one such Pod runs in the whole cluster, on some node. But having the Pod listen on a node's port creates a problem: with a Service, a user can access the NodePort of any node, whereas now there is only one Pod listening on one node. What if that node goes down?
Use a DaemonSet so that exactly one such Pod runs on every node; then traffic can be directed to any node. If the cluster has very many nodes, say 1000, there is no need to run this Pod on all of them; a DaemonSet can be limited to a subset of nodes. Take 3 of the 1000 nodes, taint them so that other Pods cannot be scheduled onto them, and define the DaemonSet so that its Pods run only on those 3 nodes and tolerate that taint. Those 3 nodes are then dedicated to receiving layer-7 traffic from outside the cluster. A sketch of this setup follows.
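A minimal sketch of that setup, assuming the three dedicated nodes are labeled edge-node=true and tainted with edge-node=true:NoSchedule (the label and taint key are hypothetical names used only for illustration; the image is the one deployed later in this post):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-ingress-controller
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: edge-ingress-controller
  template:
    metadata:
      labels:
        app: edge-ingress-controller
    spec:
      hostNetwork: true                  # listen directly on the node's own address
      nodeSelector:
        edge-node: "true"                # only the three dedicated edge nodes carry this label
      tolerations:
      - key: edge-node                   # tolerate the taint that keeps ordinary Pods off these nodes
        operator: Equal
        value: "true"
        effect: NoSchedule
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0
        ports:
        - containerPort: 80
        - containerPort: 443
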
In Kubernetes this Pod has a dedicated name: the Ingress Controller. An Ingress Controller is one Pod or a group of Pods; it is simply an application that has layer-7 proxying and scheduling capabilities. On Kubernetes there are usually four choices: Nginx, Traefik, Envoy, and HAProxy. HAProxy is the least favored, and in the service-mesh world people now lean toward Envoy.
So when using an Ingress Controller we effectively have three choices: Nginx, Traefik, or Envoy. People building microservices mostly lean toward Envoy; Traefik was also designed for microservices and generates its configuration dynamically, while Nginx was retrofitted for this later.
Suppose this layer-7 Pod fronts several backend services: one group of Pods provides an e-commerce service, a second group a social service, a third an API service, and a fourth a forum service. How does the layer-7 Pod route to these four groups of HTTP services? Taking Nginx as an example, they become four upstreams with four virtual hosts defined. If the Pod has only one IP address, the four virtual hosts can only be distinguished by four host names, each mapping to its own group of backend Pods, and those four names must be publicly resolvable. If we do not have that many host names, only a single domain, we use URL mapping instead: each location maps to a different upstream, as sketched below.
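As a rough sketch of how those two styles look once expressed as an Ingress resource (every host name, path, and Service name below is made up purely for illustration):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: edge-routing
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: shop.example.com            # name-based virtual host -> the e-commerce Pods
    http:
      paths:
      - path: /
        backend:
          serviceName: shop
          servicePort: 80
  - host: www.example.com             # a single host using URL mapping
    http:
      paths:
      - path: /api                    # /api -> the API Pods
        backend:
          serviceName: api
          servicePort: 80
      - path: /forum                  # /forum -> the forum Pods
        backend:
          serviceName: forum
          servicePort: 80
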

Ingress

Now there is a problem: an Nginx upstream has to list its member hosts, but the backends here are not fixed hosts. Pods have a lifecycle and can die at any time, and a replacement Pod comes up with a different IP address. The backends can also be scaled up or down dynamically, so the set of Pods keeps changing.
How does a Service deal with this? It associates Pods through a label selector, and it continuously watches the API in the apiserver to see whether the resources it matches have changed.
What about Nginx here? Nginx runs inside a Pod and its configuration lives inside that Pod, so what happens when the Pods behind an upstream change? Likewise, the Ingress Controller must watch the API for changes to the backend Pods. But how does it know which Pods qualify? The Ingress Controller cannot figure this out by itself; it has to rely on a Service, so we still need to create Services: each upstream server group corresponds to one Service. That Service is not an intermediate hop in the proxy path, though; it exists only to group the Pods, so we know which Pods to find, and traffic never passes through it. When the Pods change, the Service's endpoints change accordingly, but how does that change get reflected into the configuration file? That depends on a dedicated resource called Ingress. Ingress and Ingress Controller are two different things. When we define an Ingress, we declare how we expect the Ingress Controller to build a frontend and a backend, i.e. the upstream servers, and the Ingress learns which hosts belong in that upstream through the Service.
An Ingress also has a special property: as a resource, its rules can be injected directly into the Ingress Controller and saved as configuration. Whenever the Ingress notices that the Pods selected by the Service have changed, that change is reflected in the Ingress immediately, the Ingress injects the update into the Ingress Controller Pod, and it can even trigger the main container's process to reload its configuration. Nginx is not a great fit for this scenario, since every configuration change requires a reload; Traefik and Envoy were built for exactly this kind of design: when something changes they pick it up and apply it automatically without a reload, or they watch their own configuration and reload themselves when it changes.
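A minimal sketch of such a grouping-only Service (the name and the app: myapp label are placeholders); making it headless with clusterIP: None underlines that it only tracks endpoints and never forwards traffic itself, which is exactly the pattern Example 2 below uses:

apiVersion: v1
kind: Service
metadata:
  name: myapp-group
  namespace: default
spec:
  clusterIP: None        # headless: no virtual IP, the Service only collects endpoints
  selector:
    app: myapp           # the Ingress Controller reads these endpoints and proxies to the Pod IPs directly
  ports:
  - name: http
    port: 80
    targetPort: 80
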

Usage workflow: first define the Ingress Controller Pod, then define the Ingress, and then define the backend Pods and create the Service for them.

Ingress resource definition fields

[root@spark32 ~]# kubectl explain ingress
KIND: Ingress
VERSION: extensions/v1beta1
DESCRIPTION:
Ingress is a collection of rules that allow inbound connections to reach
the endpoints defined by a backend. An Ingress can be configured to give
services externally-reachable urls, load balance traffic, terminate SSL,
offer name based virtual hosting etc. DEPRECATED - This group version of
Ingress is deprecated by networking.k8s.io/v1beta1 Ingress. See the release
notes for more information.
FIELDS:
apiVersion <string>
APIVersion defines the versioned schema of this representation of an
object. Servers should convert recognized schemas to the latest internal
value, and may reject unrecognized values. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#resources
kind <string>
Kind is a string value representing the REST resource this object
represents. Servers may infer this from the endpoint the client submits
requests to. Cannot be updated. In CamelCase. More info:
https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds
metadata <Object>
Standard object's metadata. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata
spec <Object>
Spec is the desired state of the Ingress. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
status <Object>
Status is the current state of the Ingress. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status

The top-level fields are still the same five.

[root@spark32 ~]# kubectl explain ingress.spec
KIND: Ingress
VERSION: extensions/v1beta1
RESOURCE: spec <Object>
DESCRIPTION:
Spec is the desired state of the Ingress. More info:
https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#spec-and-status
IngressSpec describes the Ingress the user wishes to exist.
FIELDS:
backend <Object>
A default backend capable of servicing requests that don't match any rule.
At least one of 'backend' or 'rules' must be specified. This field is
optional to allow the loadbalancer controller or defaulting logic to
specify a global default.
rules <[]Object>
A list of host rules used to configure the Ingress. If unspecified, or no
rule matches, all traffic is sent to the default backend.
tls <[]Object>
TLS configuration. Currently the Ingress only supports a single TLS port,
443. If multiple members of this list specify different hosts, they will be
multiplexed on the same port according to the hostname specified through
the SNI TLS extension, if the ingress controller fulfilling the ingress
supports SNI.

[root@spark32 ~]# kubectl explain ingress.spec.backend
[root@spark32 ~]# kubectl explain ingress.spec.rules
[root@spark32 ~]# kubectl explain ingress.spec.rules.http

The official documentation shows that an Ingress can take several forms: either name-based virtual hosting or URL mapping.
The Ingress Controller is now treated as a Kubernetes add-on. A Kubernetes cluster is made up of masters, nodes, and add-ons, and there are four core add-ons: DNS, Heapster, Dashboard, and the Ingress Controller.

Creating the Nginx Ingress Controller

Open the Kubernetes organization on GitHub in a browser; under it there is a project called "ingress-nginx". Click into it:

Deployment only uses the files under the deploy/static directory, so we can either clone the whole project or go straight into deploy/static and download those manifest files.

Before deploying, it is worth a quick look at which resources a Nginx Ingress Controller deployment involves:

  • namespace.yaml: the Nginx Ingress Controller is deployed in a dedicated namespace; open namespace.yaml to see it.
  • configmap.yaml: a ConfigMap is used to inject configuration into Nginx from the outside; see configmap.yaml for its contents (a small tuning sketch follows this list).
  • rbac.yaml: a cluster deployed with kubeadm enables RBAC by default; this file defines a ClusterRole and the necessary bindings so that the Ingress Controller gains access to namespaces it otherwise could not reach.
  • with-rbac.yaml: the manifest that deploys the Ingress Controller itself, with RBAC applied.
  • mandatory.yaml: a manifest that combines all of the above resources into a single file.
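As a rough idea of how that injection works, here is a sketch of the nginx-configuration ConfigMap with two tuning keys filled in; proxy-body-size and use-forwarded-headers are standard ingress-nginx ConfigMap options, but these particular entries are illustrative additions of mine, not contents of the stock configmap.yaml:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
  namespace: ingress-nginx
data:
  proxy-body-size: "20m"          # raises the client_max_body_size rendered into nginx.conf
  use-forwarded-headers: "true"   # trust X-Forwarded-* headers set by a front-end load balancer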

Download the YAML manifests:

[root@spark32 ~]# cd manifests/
[root@spark32 manifests]# mkdir ingress-nginx
[root@spark32 manifests]# cd ingress-nginx/
[root@spark32 ingress-nginx]# for file in configmap.yaml namespace.yaml rbac.yaml with-rbac.yaml; do wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/$file; done
[root@spark32 ingress-nginx]# ls
configmap.yaml namespace.yaml rbac.yaml with-rbac.yaml

Create the namespace first; the other YAML files can be applied in any order:

[root@spark32 ingress-nginx]# kubectl apply -f namespace.yaml
namespace/ingress-nginx created

Create the remaining resources:

[root@spark32 ingress-nginx]# kubectl apply -f ./
configmap/nginx-configuration created
configmap/tcp-services created
configmap/udp-services created
namespace/ingress-nginx unchanged
serviceaccount/nginx-ingress-serviceaccount created
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole created
role.rbac.authorization.k8s.io/nginx-ingress-role created
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding created
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding created
deployment.apps/nginx-ingress-controller created

[root@spark32 ingress-nginx]# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-86449c74bb-jth2m 0/1 ContainerCreating 0 16s

The image used by this Pod pulls very slowly from inside China, so this took quite a while.

[root@spark32 ingress-nginx]# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-86449c74bb-jth2m 1/1 Running 0 83m

You can pull the image manually in advance:

docker pull quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.25.0

The Nginx Ingress Controller now has its own official project site: https://kubernetes.github.io/ingress-nginx/

With the generic deployment, a single command is enough:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/mandatory.yaml

mandatory.yaml simply defines all the required resources in one manifest. Since I deployed on virtual machines, I also need to check the bare-metal section for the extra step it requires:

Without this NodePort Service you will find that, once deployed, the Ingress Controller can only be reached from inside the cluster, because it has no way to receive external traffic. To bring traffic in, we create a NodePort-type Service for it.
There are two ways to let the Ingress Controller receive external traffic:

  • Option 1: create a NodePort-type Service whose label selector matches the Ingress Controller Pod.
  • Option 2: deploy the Ingress Controller Pod so that it shares the node's network namespace. This requires editing with-rbac.yaml before creating the controller, with the following changes:
    • Change the top-level kind from Deployment to DaemonSet
    • Remove the replicas field under the top-level spec
    • Add hostNetwork: true under spec.template.spec

I chose the first option here, so I still need to create a NodePort-type Service for the Ingress Controller; otherwise it cannot be accessed from outside the cluster.

[root@spark32 ingress-nginx]# wget https://raw.githubusercontent.com/kubernetes/ingress-nginx/master/deploy/static/provider/baremetal/service-nodeport.yaml
[root@spark32 ingress-nginx]# vim service-nodeport.yaml
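The stock service-nodeport.yaml just defines a NodePort Service that selects the controller Pod; the edit made with vim here pins the node ports so the external entry point stays stable. A sketch of the result, assuming the upstream app.kubernetes.io labels for the selector; the 30080/30443 values are my choice (they match the PORT(S) column shown below), not something the stock file ships with:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080      # pinned so clients always use the same port on every node
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443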


[root@spark32 ingress-nginx]# kubectl apply -f .
[root@spark32 ingress-nginx]# kubectl get svc -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx NodePort 10.110.228.115 <none> 80:30080/TCP,443:30443/TCP 24s

Creating an Ingress

Look at the Ingress examples on the Kubernetes site, https://kubernetes.io/docs/concepts/services-networking/ingress/ ; the simplest one looks like this:

apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /testpath
        backend:
          serviceName: test
          servicePort: 80

The annotations part is critical and must be defined; this is where we state that the Ingress Controller we are using is Nginx.

Example 1

Now define an Ingress that exposes the Pods of myapp-deploy to the outside through the Nginx Ingress Controller.

[root@spark32 manifests]# kubectl get deploy
NAME READY UP-TO-DATE AVAILABLE AGE
myapp-deploy 5/5 5 5 5d4h
redis 1/1 1 1 2d4h

There is already a Deployment; create a Service for it:

[root@spark32 manifests]# kubectl expose deploy myapp-deploy --name=myapp --port=80 --target-port=80
service/myapp exposed
[root@spark32 manifests]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 130d
myapp ClusterIP 10.103.20.216 <none> 80/TCP 8s
myapp-svc-headless ClusterIP None <none> 80/TCP 2d3h
redis ClusterIP 10.97.97.97 <none> 6379/TCP 2d4h

When defining the Ingress we add an annotation stating that the Ingress Controller to be used is Nginx, not Traefik or Envoy. The controller relies on this annotation to recognize the Ingress and translate it into rules that match that controller.

[root@spark32 ingress-nginx]# cd ..
[root@spark32 manifests]# mkdir ingress
[root@spark32 manifests]# cd ingress
[root@spark32 ingress]# vim ingress-myapp.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-myapp
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: myapp.wisedu.com
    http:
      paths:
      - path: /
        backend:
          serviceName: myapp
          servicePort: 80
[root@spark32 ingress]# kubectl apply -f ingress-myapp.yaml
ingress.extensions/ingress-myapp created
[root@spark32 ingress]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-myapp myapp.wisedu.com 80 5s



Check whether the configuration has been added to Nginx:

[root@spark32 manifests]# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-86449c74bb-jth2m 1/1 Running 0 172m
[root@spark32 manifests]# kubectl exec nginx-ingress-controller-86449c74bb-jth2m -n ingress-nginx -it -- /bin/sh
$ ls
geoip lua mime.types modsecurity modules nginx.conf opentracing.json owasp-modsecurity-crs template
$ ps -ef|grep nginx
www-data 1 0 0 07:17 ? 00:00:00 /usr/bin/dumb-init -- /nginx-ingress-controller --configmap=ingress-nginx/nginx-configuration --tcp-services-configmap=ingress-nginx/tcp-services --udp-services-configmap=ingress-nginx/udp-services --publish-service=ingress-nginx/ingress-nginx --annotations-prefix=nginx.ingress.kubernetes.io
www-data 6 1 0 07:17 ? 00:00:11 /nginx-ingress-controller --configmap=ingress-nginx/nginx-configuration --tcp-services-configmap=ingress-nginx/tcp-services --udp-services-configmap=ingress-nginx/udp-services --publish-service=ingress-nginx/ingress-nginx --annotations-prefix=nginx.ingress.kubernetes.io
www-data 35 6 0 07:17 ? 00:00:00 nginx: master process /usr/local/openresty/nginx/sbin/nginx -c /etc/nginx/nginx.conf
www-data 320 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 321 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 322 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 323 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 324 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 325 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 326 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 327 35 0 08:54 ? 00:00:00 nginx: worker process
www-data 600 593 0 09:03 pts/0 00:00:00 grep nginx
$ cat /etc/nginx/nginx.conf



Next, edit the hosts file on the client. Here I edit the /etc/hosts file on my own laptop and resolve the domain myapp.wisedu.com to the IP addresses of all the cluster nodes (I have 3 worker nodes here):

172.16.206.17 myapp.wisedu.com
172.16.206.31 myapp.wisedu.com
172.16.206.16 myapp.wisedu.com
172.16.206.32 myapp.wisedu.com

Open a browser and access it (through the NodePort, e.g. http://myapp.wisedu.com:30080/):

Example 2

1. Create the Deployment and the Service (I use a headless Service in this example)

[root@spark32 manifests]# vim tomcat-deploy.yaml
apiVersion: v1
kind: Service
metadata:
  name: tomcat
  namespace: default
spec:
  selector:
    app: tomcat
    release: canary
  clusterIP: None
  ports:
  - name: http
    port: 8080
    targetPort: 8080
  - name: ajp
    port: 8009
    targetPort: 8009
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat-deploy
  namespace: default
spec:
  replicas: 3
  selector:
    matchLabels:
      app: tomcat
      release: canary
  template:
    metadata:
      labels:
        app: tomcat
        release: canary
    spec:
      containers:
      - name: myapp
        image: tomcat:8.5.32-jre8-alpine
        ports:
        - name: http
          containerPort: 8080
        - name: ajp
          containerPort: 8009

[root@spark32 manifests]# kubectl apply -f tomcat-deploy.yaml
service/tomcat created
deployment.apps/tomcat-deploy created

2. Create the Ingress

[root@spark32 ingress]# vim ingress-tomcat.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: tomcat.wisedu.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat
          servicePort: 8080
[root@spark32 ingress]# kubectl apply -f ingress-tomcat.yaml
ingress.extensions/ingress-tomcat created


[root@spark32 ingress]# kubectl describe ingress ingress-tomcat
Name: ingress-tomcat
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
tomcat.wisedu.com
/ tomcat:8080 (10.244.1.21:8080,10.244.2.56:8080,10.244.3.9:8080)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-tomcat","namespace":"default"},"spec":{"rules":[{"host":"tomcat.wisedu.com","http":{"paths":[{"backend":{"serviceName":"tomcat","servicePort":8080},"path":"/"}]}}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 90s nginx-ingress-controller Ingress default/ingress-tomcat
Normal UPDATE 31s nginx-ingress-controller Ingress default/ingress-tomcat

3. Configure the hosts file
On the client machine that will access this service, configure its hosts file; mine looks like this:

172.16.206.17 myapp.wisedu.com tomcat.wisedu.com
172.16.206.31 myapp.wisedu.com tomcat.wisedu.com
172.16.206.16 myapp.wisedu.com tomcat.wisedu.com
172.16.206.32 myapp.wisedu.com tomcat.wisedu.com

Open a browser and access it (e.g. http://tomcat.wisedu.com:30080/):

Example 3: HTTPS

We need a certificate and a private key, and they have to be packaged in a particular format before they can be handed to the Ingress. It is not enough to generate a certificate and drop it into the Nginx Pod; it first has to be converted into a special format called a Secret, which is itself a standard Kubernetes object. The Secret is injected into the Pod and referenced by it.
I will not set up a CA here; I will just create a self-signed certificate.
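For reference, the Secret created in step 2 below has roughly this shape; the data values are placeholders for the base64-encoded certificate and key, which kubectl create secret tls fills in automatically:

apiVersion: v1
kind: Secret
metadata:
  name: tomcat-ingress-secret
  namespace: default
type: kubernetes.io/tls        # the dedicated Secret type for TLS certificate/key pairs
data:
  tls.crt: <base64-encoded tls.crt>
  tls.key: <base64-encoded tls.key>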

1. Generate a private key and a self-signed certificate

[root@spark32 ingress]# openssl genrsa -out tls.key 2048
Generating RSA private key, 2048 bit long modulus
.......+++
..........................................................................................................+++
e is 65537 (0x10001)
[root@spark32 ingress]# ls
ingress-myapp.yaml ingress-tomcat.yaml tls.key
[root@spark32 ingress]# openssl req -new -x509 -key tls.key -out tls.crt -subj /C=CN/ST=Beijing/O=DevOps/CN=tomcat.wisedu.com
[root@spark32 ingress]# ls
ingress-myapp.yaml ingress-tomcat.yaml tls.crt tls.key

2. Create the Secret

[root@spark32 ingress]# kubectl create secret tls tomcat-ingress-secret --cert=tls.crt --key=tls.key
secret/tomcat-ingress-secret created
[root@spark32 ingress]# kubectl get secret
NAME TYPE DATA AGE
default-token-wb7fl kubernetes.io/service-account-token 3 131d
tomcat-ingress-secret kubernetes.io/tls 2 8s
[root@spark32 ingress]# kubectl describe secret tomcat-ingress-secret
Name: tomcat-ingress-secret
Namespace: default
Labels: <none>
Annotations: <none>
Type: kubernetes.io/tls
Data
====
tls.crt: 1245 bytes
tls.key: 1675 bytes

3. Create an Ingress that includes this Secret

[root@spark32 manifests]# kubectl explain ingress.spec
...
tls <[]Object>
TLS configuration. Currently the Ingress only supports a single TLS port,
443. If multiple members of this list specify different hosts, they will be
multiplexed on the same port according to the hostname specified through
the SNI TLS extension, if the ingress controller fulfilling the ingress
supports SNI.
...
[root@spark32 manifests]# kubectl explain ingress.spec.tls
...
FIELDS:
hosts <[]string>
Hosts are a list of hosts included in the TLS certificate. The values in
this list must match the name/s used in the tlsSecret. Defaults to the
wildcard host setting for the loadbalancer controller fulfilling this
Ingress, if left unspecified.
secretName <string>
SecretName is the name of the secret used to terminate SSL traffic on 443.
Field is left optional to allow SSL routing based on SNI hostname alone. If
the SNI host in a listener conflicts with the "Host" header field used by
an IngressRule, the SNI host is used for termination and value of the Host
header is used for routing.
...

These fields declare which host is served over TLS and which Secret supplies the certificate, private key, and related material.

[root@spark32 ingress]# cp ingress-tomcat.yaml ingress-tomcat-tls.yaml
[root@spark32 ingress]# vim ingress-tomcat-tls.yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-tomcat-tls
  namespace: default
  annotations:
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: tomcat.wisedu.com
    http:
      paths:
      - path: /
        backend:
          serviceName: tomcat
          servicePort: 8080
  tls:
  - hosts:
    - tomcat.wisedu.com
    secretName: tomcat-ingress-secret

[root@spark32 ingress]# kubectl apply -f ingress-tomcat-tls.yaml
ingress.extensions/ingress-tomcat-tls created
[root@spark32 ingress]# kubectl get ingress
NAME HOSTS ADDRESS PORTS AGE
ingress-myapp myapp.wisedu.com 80 16h
ingress-tomcat tomcat.wisedu.com 80 32m
ingress-tomcat-tls tomcat.wisedu.com 80, 443 9s
[root@spark32 ingress]# kubectl describe ingress ingress-tomcat-tls
Name: ingress-tomcat-tls
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
tomcat-ingress-secret terminates tomcat.wisedu.com
Rules:
Host Path Backends
---- ---- --------
tomcat.wisedu.com
/ tomcat:8080 (10.244.1.21:8080,10.244.2.56:8080,10.244.3.9:8080)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx"},"name":"ingress-tomcat-tls","namespace":"default"},"spec":{"rules":[{"host":"tomcat.wisedu.com","http":{"paths":[{"backend":{"serviceName":"tomcat","servicePort":8080},"path":"/"}]}}],"tls":[{"hosts":["tomcat.wisedu.com"],"secretName":"tomcat-ingress-secret"}]}}
kubernetes.io/ingress.class: nginx
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 7s nginx-ingress-controller Ingress default/ingress-tomcat-tls

Check the configuration inside the Nginx Ingress Controller:

[root@spark32 manifests]# kubectl get pods -n ingress-nginx
NAME READY STATUS RESTARTS AGE
nginx-ingress-controller-86449c74bb-bw9fv 1/1 Running 0 16h
[root@spark32 manifests]# kubectl exec nginx-ingress-controller-86449c74bb-bw9fv -n ingress-nginx -it -- /bin/sh
$ cat nginx.conf | more


Access the site over HTTPS in a browser (through the NodePort, e.g. https://tomcat.wisedu.com:30443/):