This post is a rough draft of mine, kept mainly so I can copy from it later; it does not reach any particular conclusion.
The cluster's network-related components are iptables and Calico. Cluster information is shown below.
Commonly used commands:
iptables -S -t filter
iptables -nvL -t filter
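-S prints the rules in iptables-save format, while -nvL shows them with packet/byte counters, which is handy for checking whether a rule is actually being hit. Either form also accepts a chain name, for example:
iptables -t filter -nvL KUBE-FORWARD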
Which mode kube-proxy is running in can be confirmed from its startup log:
kubectl logs -n kube-system kube-proxy-xxxx
I1013 02:49:50.053115 1 node.go:135] Successfully retrieved node IP: 10.19.0.56
I1013 02:49:50.053178 1 server_others.go:172] Using ipvs Proxier.
W1013 02:49:50.053393 1 proxier.go:420] IPVS scheduler not specified, use rr by default
I1013 02:49:50.053601 1 server.go:571] Version: v1.17.2
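The log confirms kube-proxy is in IPVS mode with round-robin scheduling, so Service load balancing lives in IPVS virtual servers rather than in iptables NAT rules; iptables is still used for marking and masquerading. The IPVS table can be listed with ipvsadm, assuming it is installed on the node:
ipvsadm -Ln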
filter
Chains:
-N KUBE-FIREWALL
-N KUBE-FORWARD
-N cali-FORWARD
-N cali-INPUT
-N cali-OUTPUT
-N cali-failsafe-in
-N cali-failsafe-out
-N cali-forward-check
-N cali-forward-endpoint-mark
-N cali-from-endpoint-mark
-N cali-from-hep-forward
-N cali-from-host-endpoint
-N cali-from-wl-dispatch
-N cali-from-wl-dispatch-0
-N cali-fw-cali01b67da5b30
-N cali-fw-cali01f05402926
-N cali-fw-cali07f2c47454d
-N cali-fw-cali09a918145b4
-N cali-fw-cali20c7068a020
-N cali-fw-calie74a6684a50
-N cali-pri-_b689sAm1phz5c-Iq03
-N cali-pri-_hNSGmJYNT8uLIzxesP
-N cali-pri-_mel_KhKBhu1g7PDmvg
-N cali-pri-_rXtG25Noohen7RWUxB
-N cali-pri-kns.default
-N cali-pri-kns.kube-system
-N cali-pri-kns.monitor
-N cali-pri-ksa.default.default
-N cali-pri-ksa.monitor.default
-N cali-pro-_b689sAm1phz5c-Iq03
-N cali-pro-_hNSGmJYNT8uLIzxesP
-N cali-pro-_mel_KhKBhu1g7PDmvg
-N cali-pro-_rXtG25Noohen7RWUxB
-N cali-pro-kns.default
-N cali-pro-kns.kube-system
-N cali-pro-kns.monitor
-N cali-pro-ksa.default.default
-N cali-pro-ksa.monitor.default
-N cali-set-endpoint-mark
-N cali-set-endpoint-mark-0
-N cali-sm-cali01b67da5b30
-N cali-sm-cali01f05402926
-N cali-sm-cali07f2c47454d
-N cali-sm-cali09a918145b4
-N cali-sm-cali20c7068a020
-N cali-sm-calie74a6684a50
-N cali-to-hep-forward
-N cali-to-host-endpoint
-N cali-to-wl-dispatch
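These cali-* chains are only reachable through jumps that Calico inserts into the built-in chains. Roughly: cali-fw-*/cali-tw-* are per-workload "from/to workload" chains keyed by the veth name, cali-pri-*/cali-pro-* are profile inbound/outbound chains, and cali-sm-* set per-endpoint marks. The entry points can be seen with:
iptables -t filter -S INPUT | grep cali
iptables -t filter -S FORWARD | grep cali
iptables -t filter -S OUTPUT | grep cali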
nat
Three chains:
PREROUTING:
  KUBE-FIREWALL:
    KUBE-MARK-DROP
OUTPUT:
  KUBE-SERVICES:
    KUBE-MARK-MASQ:
      -A KUBE-MARK-MASQ -j MARK --set-xmark 0x4000/0x4000
    KUBE-NODE-PORT
POSTROUTING:
  KUBE-POSTROUTING:
    -A KUBE-POSTROUTING -m comment --comment "kubernetes service traffic requiring SNAT" -m mark --mark 0x4000/0x4000 -j MASQUERADE
    -A KUBE-POSTROUTING -m comment --comment "Kubernetes endpoints dst ip:port, source ip for solving hairpin purpose" -m set --match-set KUBE-LOOP-BACK dst,dst,src -j MASQUERADE
Others:
  KUBE-LOAD-BALANCER: (unreferenced)
    KUBE-MARK-MASQ
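So the flow is: KUBE-MARK-MASQ tags packets that need SNAT with the 0x4000 mark, and KUBE-POSTROUTING masquerades anything carrying that mark, plus hairpin traffic matched by the KUBE-LOOP-BACK ipset. The jumps from the built-in chains can be confirmed directly:
iptables -t nat -S PREROUTING
iptables -t nat -S OUTPUT
iptables -t nat -S POSTROUTING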
Create a Deployment and a Service:
iptables -S -t filter > sfilter
iptables -nvL -t filter > filter
iptables -S -t nat > snat
iptables -nvL -t nat > nat
kubectl create ns kelu
kubectl run cka2 --image=nginx --port=80 --expose=true -n kelu
iptables -S -t filter > sfilter2
iptables -nvL -t filter > filter2
iptables -S -t nat > snat2
iptables -nvL -t nat > nat2
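Then compare the before/after snapshots, e.g.:
vimdiff sfilter sfilter2
vimdiff snat snat2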
Comparing the dumps shows that the additions are confined to the filter table on host rqkubedev02; the nat table there, and both the nat and filter tables on the other machines, are unchanged.
The main additions:
-N cali-fw-calida47883c356
-N cali-pri-kns.kelu
-N cali-pri-ksa.kelu.default
-N cali-pro-kns.kelu
-N cali-pro-ksa.kelu.default
-N cali-sm-calida47883c356
-N cali-tw-calida47883c356
A few key identifiers appear here: kns.kelu and ksa.kelu.default are the Calico profiles for the namespace and its default service account, and calida47883c356 is the new pod's host-side veth. The same NIC can be found among the host's interfaces:
calida47883c356
1062: calida47883c356@if4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1440 qdisc noqueue state UP
link/ether ee:ee:ee:ee:ee:ee brd ff:ff:ff:ff:ff:ff link-netnsid 5
inet6 fe80::ecee:eeff:feee:eeee/64 scope link
valid_lft forever preferred_lft forever
From the rule comments and the way the chains reference each other, one can guess that these chains are used for policy enforcement.
Look at the nginx container on this host:
docker ps | grep nginx
cbe00452d31e nginx "/docker-entrypoint.…" 22 minutes ago Up 22 minutes k8s_cka2_cka2-75dbf7c54-gd27w_kelu_7f813a95-6359-4e7d-9d88-d216c8a39457_0
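To confirm that calida47883c356 really is the host side of this pod's veth pair, here is a quick sketch assuming nsenter is available. The @if4 suffix above says the peer interface has ifindex 4 inside the pod's netns:
PID=$(docker inspect -f '{{.State.Pid}}' cbe00452d31e)
nsenter -t $PID -n ip link show eth0   # expect something like "4: eth0@if1062", pointing back at the host-side ifindex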
Dump and save the iptables state from time to time for comparison:
iptables -S -t filter > sfilter3
iptables -nvL -t filter > filter3
iptables -S -t nat > snat3
iptables -nvL -t nat > nat3
iptables -S -t raw > sraw3
iptables -nvL -t raw > raw3
iptables -S -t mangle > smangle3
iptables -nvL -t mangle > mangle3
iptables -S -t filter > sfilter4
iptables -nvL -t filter > filter4
iptables -S -t nat > snat4
iptables -nvL -t nat > nat4
iptables -S -t raw > sraw4
iptables -nvL -t raw > raw4
iptables -S -t mangle > smangle4
iptables -nvL -t mangle > mangle4
vimdiff sfilter3 sfilter4
vimdiff snat3 snat4
vimdiff sraw3 sraw4
vimdiff smangle3 smangle4
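The snapshot-and-diff loop can be wrapped in a small helper; a sketch (snap is a made-up name):
snap() { for t in filter nat raw mangle; do iptables -S -t $t > "s$t$1"; iptables -nvL -t $t > "$t$1"; done; }
snap 3
# ...change something in the cluster...
snap 4
vimdiff sfilter3 sfilter4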
Create a Calico GlobalNetworkPolicy:
cat global_network_policy.yaml
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-cka
spec:
  selector: run == 'cka2'
  types:
  - Ingress
  - Egress
  ingress:
  egress:
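GlobalNetworkPolicy is a projectcalico.org/v3 resource, so in this setup it is applied with calicoctl rather than kubectl:
calicoctl apply -f global_network_policy.yaml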
This changes the filter, raw, and mangle tables; among the additions:
-N cali-pi-default.deny-cka
-N cali-po-default.deny-cka
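cali-pi-* and cali-po-* are the per-policy ingress and egress chains; their rules can be dumped directly:
iptables -t filter -S cali-pi-default.deny-cka
iptables -t filter -S cali-po-default.deny-cka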
Now put some actual rules into the policy:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: deny-cka
spec:
  selector: run == 'cka2'
  types:
  - Ingress
  # - Egress
  ingress:
  - action: Allow
    protocol: TCP
    source:
      selector: color == 'blue'
    destination:
      ports:
      - 80
  # egress:
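Re-apply and check what Calico stored:
calicoctl apply -f global_network_policy.yaml
calicoctl get globalnetworkpolicy deny-cka -o yaml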
Dump iptables again:
iptables -S -t filter > sfilter5
iptables -nvL -t filter > filter5
iptables -S -t nat > snat5
iptables -nvL -t nat > nat5
iptables -S -t raw > sraw5
iptables -nvL -t raw > raw5
iptables -S -t mangle > smangle5
iptables -nvL -t mangle > mangle5
vimdiff sfilter5 sfilter4
vimdiff snat5 snat4
vimdiff sraw5 sraw4
vimdiff smangle5 smangle4
With egress removed from the policy, the filter, raw, and mangle tables change as follows:
The filter table loses the egress-related rules and gains the ingress ones.
At this point the pod can no longer be reached from either outside or inside the cluster (nothing carries the color == 'blue' label yet).
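A quick way to check reachability (an illustrative one-off test pod; the Service created by --expose is named cka2):
kubectl run test --rm -it --restart=Never --image=busybox -n kelu -- wget -qO- -T 2 cka2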
Run a new deployment and label it color == 'blue'.
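For example, an illustrative one-off pod carrying the label; since the policy's source selector matches it, this pod should now reach port 80:
kubectl run blue --rm -it --restart=Never --image=busybox -l color=blue -n kelu -- wget -qO- -T 2 cka2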
iptables -S -t filter > sfilter6
iptables -nvL -t filter > filter6
iptables -S -t nat > snat6
iptables -nvL -t nat > nat6
iptables -S -t raw > sraw6
iptables -nvL -t raw > raw6
iptables -S -t mangle > smangle6
iptables -nvL -t mangle > mangle6
vimdiff sfilter5 sfilter6
vimdiff snat5 snat6
vimdiff sraw5 sraw6
vimdiff smangle5 smangle6
This time only the filter table changed.
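Calico compiles label selectors such as color == 'blue' into kernel ipsets that the new filter rules match with -m set; the sets present on the host can be listed with:
ipset list | grep '^Name: cali'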