
Canal-based Network Policy

flannel, calico, and Canal

Calico's configuration depends on an understanding of BGP and related protocols, so we will not use Calico here as the network plugin that provides networking; instead we focus on how Calico provides network policy, since flannel by itself cannot. The two have been combined into a new project called Canal: on top of the networking that flannel provides, Calico is layered on to serve the network-policy side.
The network Calico uses by default is not 10.244.0.0/16. If you take Calico as the network plugin, it works in 192.168.0.0/16 (a 16-bit mask), handing out per-node networks such as 192.168.0.0 and 192.168.1.0.
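If you ever do want Calico itself to provide networking, the default pool is controlled by the CALICO_IPV4POOL_CIDR environment variable of the calico-node container in the official manifest. A minimal excerpt as a sketch; the value shown is the stock default and would need to match your cluster's pod CIDR (e.g. kubeadm's --pod-network-cidr):

# excerpt from calico.yaml (calico-node DaemonSet); value shown is the stock default
- name: CALICO_IPV4POOL_CIDR
  value: "192.168.0.0/16"   # change to e.g. "10.244.0.0/16" to match your pod CIDR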

Deploying Calico

There are several ways to deploy Calico on Kubernetes, all described in detail in the official Calico documentation.
Here we are installing Calico on Kubernetes, and the installation we follow is the variant "Calico for policy and flannel (Canal) for networking":

Calico does not record the cluster's address allocations itself; it relies on etcd to record them. That is awkward: Kubernetes has its own etcd cluster, and Calico would need another etcd cluster of its own, which is no fun for a Kubernetes engineer. Later, Calico added support for not storing its data in a dedicated etcd: instead it calls the apiserver and hands all of its settings to the apiserver, which stores them in the Kubernetes etcd. No node, feature, or component in Kubernetes may write to the Kubernetes etcd directly; everything must go through the apiserver, chiefly to ensure data consistency.
For us the simplest approach is to use the Kubernetes etcd by way of the apiserver, and this is also the officially recommended one.
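Canal's manifest takes exactly this approach: Calico runs in Kubernetes-API-datastore mode, so no separate etcd is required. After the deployment below you can inspect its configuration, which lives in the canal-config ConfigMap created by the manifest (see the apply output):

kubectl get configmap canal-config -n kube-system -o yaml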

Deploying Canal:

[root@spark32 manifests]# mkdir networkpolicy
[root@spark32 manifests]# cd networkpolicy/
[root@spark32 networkpolicy]# wget https://docs.projectcalico.org/v3.9/manifests/canal.yaml
[root@spark32 networkpolicy]# kubectl apply -f canal.yaml
configmap/canal-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrole.rbac.authorization.k8s.io/flannel configured
clusterrolebinding.rbac.authorization.k8s.io/canal-flannel created
clusterrolebinding.rbac.authorization.k8s.io/canal-calico created
daemonset.apps/canal created
serviceaccount/canal created
[root@spark32 flannel]# kubectl get pods -n kube-system
NAME                                    READY   STATUS     RESTARTS   AGE
canal-p9v5f                             0/2     Init:1/2   0          80s
canal-sksfk                             0/2     Init:1/2   0          80s
canal-zchqq                             0/2     Init:1/2   0          80s
canal-zgwsq                             0/2     Init:1/2   0          80s
coredns-fb8b8dccf-t2xj8                 1/1     Running    0          158d
coredns-fb8b8dccf-xdjkx                 1/1     Running    0          158d
etcd-spark32                            1/1     Running    1          158d
kube-apiserver-spark32                  1/1     Running    0          158d
kube-controller-manager-spark32         1/1     Running    0          158d
kube-flannel-ds-amd64-2mcd2             1/1     Running    0          103m
kube-flannel-ds-amd64-jsx8s             1/1     Running    0          103m
kube-flannel-ds-amd64-kk8cx             1/1     Running    0          103m
kube-flannel-ds-amd64-vdjn4             1/1     Running    0          103m
kube-proxy-czhfc                        1/1     Running    0          158d
kube-proxy-gzhqw                        1/1     Running    0          41d
kube-proxy-rfxjw                        1/1     Running    0          158d
kube-proxy-xmkxq                        1/1     Running    0          158d
kube-scheduler-spark32                  1/1     Running    0          158d
kubernetes-dashboard-5f7b999d65-ngrr7   1/1     Running    0          2d3h

Wait for the Pods to come up (each canal Pod eventually shows 2/2 Running).
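Rather than re-running kubectl get pods, you can watch the rollout; this assumes the canal DaemonSet pods carry the k8s-app=canal label from the manifest:

# watch until every canal pod reports 2/2 Running
kubectl get pods -n kube-system -l k8s-app=canal -w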

Examples

A network policy (NetworkPolicy) controls different communication needs through Egress and Ingress rules: Egress is outbound traffic, Ingress is inbound traffic. The Ingress here is unrelated to the Ingress resource discussed earlier. Egress means the Pod is the source address and the remote side the destination, i.e. the Pod goes out as a client to access (or respond to) someone else; Ingress means the Pod is the destination address and the remote side the source.
When defining an Egress rule, the Pod is a client making outbound requests: its own address is fixed and its source port random, while the server's address and port are generally predictable. When defining an Ingress rule, a remote Pod is requesting us: our address and port are fixed, while the peer's address and port cannot be predicted. So when writing rules: for an Egress (outbound) rule we can constrain the destination address and destination port; for an Ingress (inbound) rule we can constrain the peer's address and our own port. Which Pods do these constraints apply to? The ones selected with podSelector.
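As an illustration of the Egress side, here is a hypothetical policy sketch (the name, label, subnet, and port are all assumptions, not from this cluster) that lets Pods labeled app=myapp connect out only to a database subnet on TCP 3306:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-egress          # hypothetical name
spec:
  podSelector:
    matchLabels:
      app: myapp                 # assumed label on the client Pods
  egress:
  - to:
    - ipBlock:
        cidr: 10.244.3.0/24      # assumed server subnet (destination address)
    ports:
    - protocol: TCP
      port: 3306                 # destination port, predictable on the server side
  policyTypes:
  - Egress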
Down the road, if each namespace hosts a different project, or even projects belonging to different customers, you can give each namespace a default policy: within a namespace all Pods communicate without obstruction, but nothing crosses namespaces.
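A sketch of such a per-namespace default (the name is illustrative; apply it with -n into each namespace): an empty podSelector in spec selects every Pod in the namespace, and a from entry with an empty podSelector admits traffic only from Pods of that same namespace:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace     # hypothetical name
spec:
  podSelector: {}                # applies to every Pod in the namespace
  ingress:
  - from:
    - podSelector: {}            # any Pod in this same namespace
  policyTypes:
  - Ingress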

[root@spark32 manifests]# kubectl explain networkpolicy
KIND:     NetworkPolicy
VERSION:  extensions/v1beta1

DESCRIPTION:
     DEPRECATED 1.9 - This group version of NetworkPolicy is deprecated by
     networking/v1/NetworkPolicy. NetworkPolicy describes what network traffic
     is allowed for a set of Pods

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#resources

   kind   <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/api-conventions.md#types-kinds

   metadata   <Object>
     Standard object's metadata. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata

   spec   <Object>
     Specification of the desired behavior for this NetworkPolicy.
[root@spark32 manifests]# kubectl explain networkpolicy.spec
KIND:     NetworkPolicy
VERSION:  extensions/v1beta1

RESOURCE: spec <Object>

DESCRIPTION:
     Specification of the desired behavior for this NetworkPolicy.

     DEPRECATED 1.9 - This group version of NetworkPolicySpec is deprecated by
     networking/v1/NetworkPolicySpec.

FIELDS:
   egress   <[]Object>
     List of egress rules to be applied to the selected pods. Outgoing traffic
     is allowed if there are no NetworkPolicies selecting the pod (and cluster
     policy otherwise allows the traffic), OR if the traffic matches at least
     one egress rule across all of the NetworkPolicy objects whose podSelector
     matches the pod. If this field is empty then this NetworkPolicy limits all
     outgoing traffic (and serves solely to ensure that the pods it selects are
     isolated by default). This field is beta-level in 1.8

   ingress   <[]Object>
     List of ingress rules to be applied to the selected pods. Traffic is
     allowed to a pod if there are no NetworkPolicies selecting the pod OR if
     the traffic source is the pod's local node, OR if the traffic matches at
     least one ingress rule across all of the NetworkPolicy objects whose
     podSelector matches the pod. If this field is empty then this NetworkPolicy
     does not allow any traffic (and serves solely to ensure that the pods it
     selects are isolated by default).

   podSelector   <Object> -required-
     Selects the pods to which this NetworkPolicy object applies. The array of
     ingress rules is applied to any pods selected by this field. Multiple
     network policies can select the same set of pods. In this case, the ingress
     rules for each are combined additively. This field is NOT optional and
     follows standard label selector semantics. An empty podSelector matches all
     pods in this namespace.

   policyTypes   <[]string>
     List of rule types that the NetworkPolicy relates to. Valid options are
     "Ingress", "Egress", or "Ingress,Egress". If this field is not specified,
     it will default based on the existence of Ingress or Egress rules; policies
     that contain an Egress section are assumed to affect Egress, and all
     policies (whether or not they contain an Ingress section) are assumed to
     affect Ingress. If you want to write an egress-only policy, you must
     explicitly specify policyTypes [ "Egress" ]. Likewise, if you want to write
     a policy that specifies that no egress is allowed, you must specify a
     policyTypes value that include "Egress" (since such a policy would not
     include an Egress section and would otherwise default to just [ "Ingress"
     ]). This field is beta-level in 1.8

Field explanations:

  • egress <[]Object>
  • ingress <[]Object>
  • podSelector <Object> -required-
    {} selects all Pods in the given namespace
  • policyTypes <[]string>
    Policy types: if a policy defines both egress and ingress rules, which one takes effect? The two do not actually conflict, since one controls outbound and the other inbound traffic; but you may also want only one direction's rules enforced at a given time, and that is exactly what policyTypes is for. Whichever types you list take effect; list both and both take effect.

Suppose you only wrote an ingress section, but listed both Egress and Ingress in policyTypes. Egress is not defined, so its default applies to the selected Pods; per the field description above, listing Egress in policyTypes with an empty egress section means all outbound traffic is denied, whereas leaving Egress out of policyTypes leaves all outbound traffic allowed.
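So an egress-only "deny everything outbound" policy is written by listing Egress in policyTypes while defining no egress section at all; a minimal sketch (the name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-egress          # hypothetical name
spec:
  podSelector: {}                # every Pod in the target namespace
  policyTypes:
  - Egress                       # Egress listed, no egress rules: all outbound denied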

[root@spark32 networkpolicy]# kubectl create ns dev
namespace/dev created
[root@spark32 networkpolicy]# kubectl create ns prod
namespace/prod created
[root@spark32 networkpolicy]# vim networkpolicy-demo.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  policyTypes:
  - Ingress

With this definition, Ingress rules are in effect but no rule is explicitly defined, which means all inbound traffic is denied by default; Egress is not listed in policyTypes, so all outbound traffic remains allowed by default.
Create the NetworkPolicy defined above in the dev namespace:

[root@spark32 networkpolicy]# kubectl apply -f networkpolicy-demo.yaml -n dev
networkpolicy.networking.k8s.io/deny-all-ingress created
[root@spark32 networkpolicy]# kubectl get netpol -n dev
NAME               POD-SELECTOR   AGE
deny-all-ingress   <none>         52s

There are no Pods in the dev namespace yet; create one:

[root@spark32 networkpolicy]# vim pod-demo.yaml
apiVersion: v1
kind: Pod
metadata:
  name: netpol-demo1
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
[root@spark32 networkpolicy]# kubectl apply -f pod-demo.yaml -n dev
pod/netpol-demo1 created
[root@spark32 networkpolicy]# kubectl get pods -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
netpol-demo1   1/1     Running   0          13s   10.244.2.3   ubuntu31   <none>           <none>

Try to access the service inside this Pod; it cannot be reached (the request hangs with no response):

[root@spark32 networkpolicy]# curl 10.244.2.3
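The command just hangs because the policy silently drops the packets instead of rejecting them. Adding a timeout makes the failure visible immediately (the comment shows the error curl would be expected to print):

curl --connect-timeout 3 10.244.2.3
# expected while the policy drops traffic: curl: (28) Connection timed out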

Now modify the policy in the dev namespace to allow all inbound traffic.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress
spec:
  podSelector: {}
  ingress:
  - {}
  policyTypes:
  - Ingress
[root@spark32 networkpolicy]# kubectl apply -f networkpolicy-demo.yaml -n dev
networkpolicy.networking.k8s.io/deny-all-ingress configured


Now it can be accessed:

[root@spark32 networkpolicy]# curl 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>

Allowing only specific inbound traffic: permit access to just one group of Pods in the dev namespace.

[root@spark32 networkpolicy]# kubectl label pod netpol-demo1 app=myapp -n dev
pod/netpol-demo1 labeled
[root@spark32 networkpolicy]# vim networkpolicy-allow-myapp-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-myapp-ingress
spec:
  podSelector:
    matchLabels:
      app: myapp
  ingress:
  - from:
    - ipBlock:
        cidr: 10.244.0.0/16
        except:
        - 10.244.1.2/32
    ports:
    - protocol: TCP
      port: 80
  policyTypes:
  - Ingress


[root@spark32 networkpolicy]# kubectl apply -f networkpolicy-allow-myapp-ingress.yaml -n dev
networkpolicy.networking.k8s.io/allow-myapp-ingress created
[root@spark32 networkpolicy]# kubectl get netpol -n dev
NAME                  POD-SELECTOR   AGE
allow-myapp-ingress   app=myapp      2s
deny-all-ingress      <none>         30m

Access it:

[root@spark32 networkpolicy]# curl 10.244.2.3
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@spark32 networkpolicy]# curl 10.244.2.3:443
curl: (7) Failed connect to 10.244.2.3:443; Connection refused

But port 443 is still reachable at the network level: the "Connection refused" comes from the Pod itself (nothing listens on 443), not from the policy, because the deny-all-ingress policy was modified above to let all inbound traffic through. Delete that policy, the one that opened up all inbound:

[root@spark32 networkpolicy]# kubectl delete -f networkpolicy-demo.yaml -n dev
networkpolicy.networking.k8s.io "deny-all-ingress" deleted
[root@spark32 networkpolicy]# curl 10.244.2.3:443

Now 443 can no longer be reached; the request is dropped and hangs.
Create another Pod in dev and see whether the rule applies to it as well:

[root@spark32 networkpolicy]# vim pod-demo2.yaml
apiVersion: v1
kind: Pod
metadata:
  name: netpol-demo2
spec:
  containers:
  - name: myapp
    image: ikubernetes/myapp:v1
[root@spark32 networkpolicy]# kubectl apply -f pod-demo2.yaml -n dev
[root@spark32 networkpolicy]# kubectl get pods -n dev -o wide
NAME           READY   STATUS    RESTARTS   AGE   IP           NODE       NOMINATED NODE   READINESS GATES
netpol-demo1   1/1     Running   0          36m   10.244.2.3   ubuntu31   <none>           <none>
netpol-demo2   1/1     Running   0          9s    10.244.2.4   ubuntu31   <none>           <none>
[root@spark32 networkpolicy]# curl 10.244.2.4
Hello MyApp | Version: v1 | <a href="hostname.html">Pod Name</a>
[root@spark32 networkpolicy]# curl 10.244.2.4:443
curl: (7) Failed connect to 10.244.2.4:443; Connection refused

Clearly, the policy above applies only to the Pods selected by its podSelector: netpol-demo2 carries no app=myapp label, so no policy selects it and all of its inbound traffic is allowed (the refusal on 443 once again comes from the application, not from a policy).
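To double-check, you could label netpol-demo2 so that allow-myapp-ingress selects it too; afterwards only TCP 80 should answer, with other ports dropped by the policy (a sketch, not run above):

kubectl label pod netpol-demo2 app=myapp -n dev
curl 10.244.2.4        # port 80: still answered, allowed by the policy
curl 10.244.2.4:443    # now hangs: dropped by the policy instead of refused by the app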

Example from the official documentation:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: test-network-policy
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
        except:
        - 172.17.1.0/24
    - namespaceSelector:
        matchLabels:
          project: myproject
    - podSelector:
        matchLabels:
          role: frontend
    ports:
    - protocol: TCP
      port: 6379
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/24
    ports:
    - protocol: TCP
      port: 5978

The ingress rules here read: for TCP port 6379 of Pods carrying the label "role=db",

  • Pods in the default namespace carrying the label "role=frontend" may access it
  • Pods in namespaces carrying the label "project=myproject" may access it
  • clients whose IP falls within 172.17.0.0/16, except those in 172.17.1.0/24, may access it
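One subtlety worth calling out: the three from items above are separate list elements and are therefore OR'ed. If the intent were "Pods labeled role=frontend that are also in namespaces labeled project=myproject" (an AND), the namespaceSelector and podSelector must be combined into a single from element, as in this sketch (the name is illustrative):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: frontend-from-myproject  # hypothetical name
  namespace: default
spec:
  podSelector:
    matchLabels:
      role: db
  ingress:
  - from:
    - namespaceSelector:         # one list element: both selectors must match
        matchLabels:
          project: myproject
      podSelector:
        matchLabels:
          role: frontend
  policyTypes:
  - Ingress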

If a Pod acts as a client, it should normally be allowed to reach any outside service: let all outbound traffic through and only control what may come in.
If you want to be stricter, deny all inbound and all outbound, then open things up selectively. But denying everything in both directions has a side effect: even Pods in the same namespace, selected by the same podSelector, can no longer talk to each other. So after setting deny-all-ingress and deny-all-egress you should add one more rule permitting traffic that leaves a Pod in this namespace and arrives at a Pod in this same namespace, i.e. allow intra-namespace Pod-to-Pod communication, so that internal communication keeps working.
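Putting that together, a sketch of the strict variant for a single namespace (names are illustrative): one policy denies everything in both directions, a second one re-opens intra-namespace traffic:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all             # hypothetical name
  namespace: prod
spec:
  podSelector: {}
  policyTypes:                       # both types listed, no rules: deny all in and out
  - Ingress
  - Egress
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-traffic # hypothetical name
  namespace: prod
spec:
  podSelector: {}
  ingress:
  - from:
    - podSelector: {}                # from Pods in this namespace
  egress:
  - to:
    - podSelector: {}                # to Pods in this namespace
  policyTypes:
  - Ingress
  - Egress

In practice you would usually also allow egress to the cluster DNS in kube-system, since the deny-all policy above otherwise breaks name resolution for the namespace.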