# Kubernetes Networking
# Service
# Why Services Exist
A Service is a core concept in Kubernetes. It provides load balancing and service discovery for a group of Pods:
- It keeps Pods reachable even as they come and go
- It defines an access policy for a group of Pods

Kubernetes introduces Service to solve the following two problems:
Pod IPs are not fixed:
- A controller dynamically tracks the list of Pod IPs and pushes updates into the load balancer configuration.
Pods run as multiple replicas:
- A load balancer is placed in front of them.
# The Relationship Between Pods and Services
Pods and Services are associated through a label selector.
The Service load-balances traffic across the matching Pods (layer 4, TCP/UDP).
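Under the hood, the controller resolves the selector into an Endpoints object listing the ready Pod IPs the Service forwards to. Once the `web` Service from the example below exists, you can inspect that mapping (output illustrative):
[root@master ~]# kubectl get endpoints web
NAME   ENDPOINTS         AGE
web    10.244.104.1:80   5s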
# The Three Service Types
- ClusterIP: for access from inside the cluster
- NodePort: exposes the application outside the cluster
- LoadBalancer: exposes the application outside the cluster, for public clouds

ClusterIP: the default type. Allocates a stable virtual IP (VIP) that is only reachable from inside the cluster (from nodes and Pods).
NodePort: opens a port on every node to expose the Service, so it can be reached from outside the cluster; a stable internal ClusterIP is also allocated. Access address: <NodeIP>:<NodePort>; the port range is 30000-32767.
LoadBalancer: like NodePort, opens a port on every node to expose the Service. In addition, Kubernetes asks the underlying cloud platform for a load balancer and adds every node (<NodeIP>:<NodePort>) as a backend.
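ClusterIP and NodePort are demonstrated below. LoadBalancer is not demoed in this lab (it needs a cloud provider), but a minimal manifest sketch would look like this (the name web-lb is hypothetical):
apiVersion: v1
kind: Service
metadata:
  name: web-lb            # hypothetical name; not part of the lab below
spec:
  type: LoadBalancer      # the cloud provider provisions the external load balancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web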
Using ClusterIP
- View the Deployment
[root@master ~]# kubectl get deployment
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
web    1/1     1            1           71s
- Quickly generate a Service manifest bound to the `web` Deployment
[root@master ~]# kubectl expose deployment web --port=80 --target-port=80 --type=ClusterIP --dry-run -o yaml > service.yaml
[root@master ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  type: ClusterIP    # Specifies the type; ClusterIP is the default
  ports:
  - port: 80         # Service (load balancer) port, only reachable inside the cluster (from nodes and Pods)
    protocol: TCP    # Protocol for this port
    targetPort: 80   # Port the container serves on
  selector:          # Label selector used to match Pods
    app: web
- Apply the Service manifest
[root@master ~]# kubectl apply -f service.yaml
service/web created
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP   29h
web          ClusterIP   10.107.119.3   <none>        80/TCP    5s
- Access the Service address
[root@master ~]# curl 10.107.119.3
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
Using NodePort
- Change the Service type
[root@master ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  type: NodePort     # Expose via a node port
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30002  # You can pin the exposed port, or omit this and let Kubernetes pick one
  selector:
    app: web
- Check the Service's updated port and access it
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        29h
web          NodePort    10.111.240.72   <none>        80:30002/TCP   8s
[root@master ~]# curl 192.168.1.104:30002
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
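The node port is opened by kube-proxy on every node, so the Service is reachable through any node's IP, not only the one used above. A quick check (node IPs are the ones from this lab; adjust to your environment):
curl http://192.168.1.120:30002/   # node1
curl http://192.168.1.121:30002/   # node2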
# Service Proxy Modes
kube-proxy supports several proxy modes:
- userspace
- iptables (the default)
- ipvs (LVS)

The default iptables mode
- Deploy a Service that exposes the Pod with the ClusterIP type
[root@master ~]# cat service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: web
  name: web
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: web
  type: ClusterIP
[root@master ~]# kubectl apply -f service.yaml
service/web unchanged
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP   4h20m
web          ClusterIP   10.101.63.199   <none>        80/TCP    21m
- Inspect how the default iptables proxy implements load balancing
[root@master ~]# iptables-save | grep 10.101.63.199
# Step 1: traffic to the ClusterIP coming from outside the Pod network is marked for masquerading (SNAT)
-A KUBE-SERVICES ! -s 10.244.0.0/16 -d 10.101.63.199/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 80 -j KUBE-MARK-MASQ
# Step 3: the Service chain forwards to a per-endpoint chain (KUBE-SEP), one per backend Pod
-A KUBE-SVC-LOLE4ISW44XBNF3G -m comment --comment "default/web" -j KUBE-SEP-NBPPORLROMQLLPIL
# Step 2: traffic to the ClusterIP on port 80 jumps into the Service chain (KUBE-SVC)
-A KUBE-SERVICES -d 10.101.63.199/32 -p tcp -m comment --comment "default/web cluster IP" -m tcp --dport 80 -j KUBE-SVC-LOLE4ISW44XBNF3G
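With a single replica there is only one endpoint chain. If the Deployment were scaled to, say, three replicas, kube-proxy would spread connections using the statistic match; a sketch of what iptables-save would then show (chain suffixes hypothetical):
-A KUBE-SVC-LOLE4ISW44XBNF3G -m statistic --mode random --probability 0.33333333349 -j KUBE-SEP-AAAAAAAAAAAAAAAA
-A KUBE-SVC-LOLE4ISW44XBNF3G -m statistic --mode random --probability 0.50000000000 -j KUBE-SEP-BBBBBBBBBBBBBBBB
-A KUBE-SVC-LOLE4ISW44XBNF3G -j KUBE-SEP-CCCCCCCCCCCCCCCC
The rules are tried in order: the first matches 1/3 of connections, the second 1/2 of the remainder, and the last catches the rest, giving an even split.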
ipvs mode
Switching to ipvs mode on a kubeadm-deployed cluster.
Note:
- The kube-proxy configuration is stored as a ConfigMap
- For the change to take effect on all nodes, the kube-proxy Pod on every node must be recreated
Edit the ConfigMap, then delete the Pods:
[root@master ~]# kubectl edit configmaps -n kube-system kube-proxy
···
mode: "ipvs"
···
[root@master ~]# kubectl delete pod -n kube-system kube-proxy-628k6 kube-proxy-7sbfn kube-proxy-jzrxz
pod "kube-proxy-628k6" deleted
pod "kube-proxy-7sbfn" deleted
pod "kube-proxy-jzrxz" deleted
- Check the current mode
[root@master ~]# kubectl logs -n kube-system kube-proxy-c8vzr | grep ipvs
I1103 09:20:08.747308 1 server_others.go:258] Using ipvs Proxier.
- Install the ipvs admin tool
[root@master ~]# yum install -y ipvsadm
[root@master ~]# ipvsadm -L -n
· -L lists all rules
· -n shows numeric addresses instead of resolving host names
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.1.104:32238 rr
  -> 10.244.104.1:80              Masq    1      0          0
TCP  10.96.0.1:443 rr
  -> 192.168.1.104:6443           Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.219.66:53             Masq    1      0          0
  -> 10.244.219.67:53             Masq    1      0          0
TCP  10.96.0.10:9153 rr
  -> 10.244.219.66:9153           Masq    1      0          0
  -> 10.244.219.67:9153           Masq    1      0          0
TCP  10.101.63.199:80 rr
  -> 10.244.104.1:80              Masq    1      0          0
TCP  10.244.219.64:32238 rr
  -> 10.244.104.1:80              Masq    1      0          0
TCP  127.0.0.1:32238 rr
  -> 10.244.104.1:80              Masq    1      0          0
TCP  172.17.0.1:32238 rr
  -> 10.244.104.1:80              Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.219.66:53             Masq    1      0          0
  -> 10.244.219.67:53             Masq    1      0          0
The ipvs traffic flow:
curl service ip > kube-ipvs0 (virtual server) > pod (real server)
Load balancing is implemented through the built-in kube-ipvs0 interface, which holds every Service VIP, as the address listing below shows.
Note: in IPVS mode, when an ingress and a service share the same ELB instance, the ingress cannot be accessed from nodes or containers inside the cluster.
9: kube-ipvs0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN group default
    link/ether 2e:a3:33:75:1d:20 brd ff:ff:ff:ff:ff:ff
    inet 10.96.0.1/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.96.0.10/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
    inet 10.101.63.199/32 scope global kube-ipvs0
       valid_lft forever preferred_lft forever
iptables vs. IPVS
iptables:
- Flexible and powerful
- Rules are matched and updated by linear traversal, so latency grows with the number of Services
IPVS:
- Works in kernel space and offers better performance at scale
- Rich set of scheduling algorithms: rr, wrr, lc, wlc, ip hash, ...
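Whichever mode is configured, you can confirm what kube-proxy is actually running on a node via its metrics endpoint (10249 is kube-proxy's default metrics port):
curl 127.0.0.1:10249/proxyMode    # prints "iptables" or "ipvs"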
# Service DNS Names
CoreDNS:
A DNS server that Kubernetes adopts by default, deployed in the cluster as Pods. CoreDNS watches the Kubernetes API and creates a DNS record for every Service, so Services can be resolved by name.
The CoreDNS YAML files can be found at:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dns/coredns
ClusterIP A record format: <service-name>.<namespace-name>.svc.cluster.local
Example: my-svc.my-namespace.svc.cluster.local
- Run a throwaway container to resolve domain names inside the cluster
[root@master ~]# kubectl run busybox -it --rm --image=busybox:1.28.4 -- sh
If you don't see a command prompt, try pressing enter.
/ # nslookup
BusyBox v1.28.4 (2018-05-22 17:00:17 UTC) multi-call binary.

Usage: nslookup [HOST] [SERVER]

Query the nameserver for the IP address of the given HOST
optionally using a specified DNS server

# In another terminal, check the Service's ClusterIP
[root@master ~]# kubectl get service
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1       <none>        443/TCP        4h58m
web          NodePort    10.101.63.199   <none>        80:32238/TCP   59m
- Reverse-resolve the ClusterIP
/ # nslookup 10.101.63.199
Server: 10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name: 10.101.63.199
Address 1: 10.101.63.199 web.default.svc.cluster.local
/ #
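The forward lookup works as well. Inside the same busybox shell, even the short name resolves, because the Pod's /etc/resolv.conf carries the cluster search domains (output illustrative):
/ # nslookup web.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      web.default
Address 1: 10.101.63.199 web.default.svc.cluster.local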
# Ingress
# Ingress Was Born to Make Up for NodePort's Shortcomings
NodePort's shortcomings:
- One port can serve only one Service, so ports must be planned in advance
- Only layer-4 load balancing
What is Ingress?
Ingress exposes a collection of rules for routing HTTP and HTTPS traffic from outside the cluster to Services inside it; the actual traffic routing is carried out by an Ingress Controller.
Ingress: an abstract Kubernetes resource that gives administrators a way to define an entry point for exposing applications.
Ingress Controller: generates concrete routing rules from Ingress resources and load-balances across the Pods.
Ingress is the set of rules that authorize inbound connections to reach cluster Services:
- External traffic is dispatched to the Service fronting the ingress-controller (e.g. via NodePort)
- The Service forwards to the ingress-controller
- The ingress-controller matches the request against the Ingress definitions (virtual host or backend URL)
- Based on the virtual host name, the request is routed to the backing group of Pods
What is an Ingress Controller?
The load balancer managed by Ingress; it provides cluster-wide load balancing.
Workflow:
- Deploy an Ingress Controller
- Create Ingress rules
# Ingress Controller
Ingress does not expose arbitrary ports or protocols. To expose services other than HTTP and HTTPS to the Internet, you typically use a Service of type NodePort or LoadBalancer.
There are many Ingress Controller implementations; here we use the NGINX controller maintained by the Kubernetes project.
Project: https://github.com/kubernetes/ingress-nginx
Changes to the YAML:
- Point the image at a domestic mirror: lizhenliang/nginx-ingress-controller:0.30.0
- Expose the Ingress Controller, usually via the host network (hostNetwork: true) or a NodePort, as sketched below
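A sketch of the relevant fragment of ingress-controller.yaml after both changes (surrounding fields elided; the exact layout depends on the manifest version, though the apply output below confirms it is a DaemonSet):
kind: DaemonSet
spec:
  template:
    spec:
      hostNetwork: true    # bind the controller directly to ports 80/443 on each node
      containers:
      - name: nginx-ingress-controller
        image: lizhenliang/nginx-ingress-controller:0.30.0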
# Prerequisites
You must have an Ingress controller to satisfy an Ingress; creating the Ingress resource by itself has no effect.
You may need to deploy an Ingress controller such as ingress-nginx; there are many Ingress controllers to choose from.
Ideally, all Ingress controllers should conform to the reference specification; in practice, different controllers behave slightly differently.
Initialize the Ingress controller
[root@master ~]# kubectl apply -f ingress-controller.yaml
namespace/ingress-nginx unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRole is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRole
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
Warning: rbac.authorization.k8s.io/v1beta1 Role is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 Role
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
Warning: rbac.authorization.k8s.io/v1beta1 RoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 RoleBinding
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
Warning: rbac.authorization.k8s.io/v1beta1 ClusterRoleBinding is deprecated in v1.17+, unavailable in v1.22+; use rbac.authorization.k8s.io/v1 ClusterRoleBinding
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
daemonset.apps/nginx-ingress-controller unchanged
limitrange/ingress-nginx configured
- Isolate the resources in their own Namespace, and create a Deployment and a Service
[root@master ~]# cat nginx.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: web-test-ns
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: web-test-ns
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  namespace: web-test-ns
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
  type: NodePort
[root@master ~]# kubectl apply -f nginx.yaml
namespace/web-test-ns created
deployment.apps/nginx created
service/nginx created
[root@master ~]# kubectl get deployment -n web-test-ns
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           30m
[root@master ~]# kubectl get service -n web-test-ns
NAME    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.106.56.3   <none>        80:31410/TCP   30m
# Creating an Ingress Rule
Ingress can give a Service an externally reachable URL, load balancing, SSL termination, HTTP routing, and more.
To make these Ingress rules take effect, the cluster administrator deploys an Ingress controller, which watches Ingress and Service changes, configures the load balancer according to the rules, and provides the access entry point.
Generate a minimal Ingress manifest, then edit it to match the environment:
[root@master ~]# kubectl create ingress ingsecret --class=default --rule="/*=nginx:80" --dry-run -o yaml > ingress2.yaml
[root@master ~]# cat ingress2.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-web
  namespace: web-test-ns
spec:
  rules:                   # Rule list
  - host: test.com         # Host name this rule matches
    http:
      paths:               # Set of paths mapped to backends
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx    # Service name
            port:
              number: 80   # Service port
- Create the Ingress and check its status
[root@master ~]# kubectl apply -f ingress2.yaml
ingress.networking.k8s.io/ingress-web created
[root@master ~]# kubectl get ingress -n web-test-ns
NAME          CLASS    HOSTS      ADDRESS   PORTS   AGE
ingress-web   <none>   test.com             80      28m
- What is the Ingress controller's entry address, and how do we access it?
[root@master ~]# kubectl get pods -A -o wide
NAMESPACE       NAME                             READY   STATUS    RESTARTS   AGE     IP              NODE    NOMINATED NODE   READINESS GATES
ingress-nginx   nginx-ingress-controller-26w7m   1/1     Running   0          3h25m   192.168.1.121   node2   <none>           <none>
ingress-nginx   nginx-ingress-controller-9bm8d   1/1     Running   0          3h25m   192.168.1.120   node1   <none>           <none>
···
web-test-ns     nginx-7848d4b86f-v27pj           1/1     Running   0          39m     10.244.104.2    node2   <none>           <none>
# 192.168.1.121 is an access address (the controller runs on every node, so 192.168.1.120 works too)
- Add a hosts mapping on the local machine
C:\Windows\System32\drivers\etc\hosts
192.168.1.121 test.com
C:\Users\zyh>ping test.com
Pinging test.com [192.168.1.121] with 32 bytes of data:
Reply from 192.168.1.121: bytes=32 time<1ms TTL=64
Reply from 192.168.1.121: bytes=32 time<1ms TTL=64
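You can also test from a Linux host without editing any hosts file by overriding the Host header (the controller routes on the virtual host name):
curl -H "Host: test.com" http://192.168.1.121/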
The site can now be reached by domain name in a browser. Next, check which ports are open on the nodes.
On the master node, ports 80 and 443 are not listening:
[root@master ~]# ss -anpt | grep 80
LISTEN 0 128 192.168.1.104:2380 *:* users:(("etcd",pid=10511,fd=3))
ESTAB 0 0 127.0.0.1:2379 127.0.0.1:47360 users:(("etcd",pid=10511,fd=80))
ESTAB 0 0 127.0.0.1:2379 127.0.0.1:47280 users:(("etcd",pid=10511,fd=42))
TIME-WAIT 0 0 192.168.1.104:46336 10.244.219.69:8080
ESTAB 0 0 127.0.0.1:47280 127.0.0.1:2379 users:(("kube-apiserver",pid=10492,fd=34))
TIME-WAIT 0 0 192.168.1.104:46254 10.244.219.69:8080
ESTAB 0 0 127.0.0.1:47376 127.0.0.1:2379 users:(("kube-apiserver",pid=10492,fd=80))
[root@master ~]# ss -anpt | grep 443
ESTAB 0 0 192.168.1.104:49890 192.168.1.104:6443 users:(("kubelet",pid=906,fd=45))
ESTAB 0 0 10.96.0.1:33866 10.96.0.1:443 users:(("calico-node",pid=13259,fd=7))
ESTAB 0 0 192.168.1.104:50300 192.168.1.104:6443 users:(("kube-controller",pid=10462,fd=14))
ESTAB 0 0 192.168.1.104:50150 192.168.1.104:6443 users:(("kube-proxy",pid=12370,fd=12))
ESTAB 0 0 10.96.0.1:33860 10.96.0.1:443 users:(("calico-node",pid=13257,fd=8))
ESTAB 0 0 192.168.1.104:49906 192.168.1.104:6443 users:(("kube-controller",pid=10462,fd=9))
ESTAB 0 0 192.168.1.104:50118 192.168.1.104:6443 users:(("kube-scheduler",pid=10479,fd=10))
ESTAB 0 0 10.96.0.1:33864 10.96.0.1:443 users:(("calico-node",pid=13261,fd=7))
ESTAB 0 0 192.168.1.104:49888 192.168.1.104:6443 users:(("kube-scheduler",pid=10479,fd=9))
On the two worker nodes, however, ports 80 and 443 are listening,
which shows the load balancing works and the ingress is functioning normally:
[root@node1 ~]# ss -anpt | grep 80
LISTEN 0 128 *:80 *:* users:(("nginx",pid=20388,fd=33),("nginx",pid=13765,fd=33))
LISTEN 0 128 *:80 *:* users:(("nginx",pid=20388,fd=32),("nginx",pid=13764,fd=32))
LISTEN 0 128 *:80 *:* users:(("nginx",pid=20388,fd=31),("nginx",pid=13763,fd=31))
LISTEN 0 128 *:80 *:* users:(("nginx",pid=20388,fd=23),("nginx",pid=13762,fd=23))
TIME-WAIT 0 0 127.0.0.1:36780 127.0.0.1:9099
TIME-WAIT 0 0 127.0.0.1:36804 127.0.0.1:9099
TIME-WAIT 0 0 127.0.0.1:36800 127.0.0.1:9099
[root@node1 ~]# ss -anpt | grep 443
LISTEN 0 128 *:443 *:* users:(("nginx",pid=20388,fd=39),("nginx",pid=13765,fd=39))
LISTEN 0 128 *:443 *:* users:(("nginx",pid=20388,fd=38),("nginx",pid=13764,fd=38))
LISTEN 0 128 *:443 *:* users:(("nginx",pid=20388,fd=37),("nginx",pid=13763,fd=37))
LISTEN 0 128 *:443 *:* users:(("nginx",pid=20388,fd=25),("nginx",pid=13762,fd=25))
ESTAB 0 0 10.96.0.1:54834 10.96.0.1:443 users:(("calico-node",pid=14921,fd=9))
ESTAB 0 0 10.96.0.1:54928 10.96.0.1:443 users:(("nginx-ingress-c",pid=20287,fd=24))
ESTAB 0 0 192.168.1.120:44100 192.168.1.104:6443 users:(("kubelet",pid=13313,fd=8))
ESTAB 0 0 10.96.0.1:54820 10.96.0.1:443 users:(("calico-node",pid=14916,fd=7))
ESTAB 0 0 10.96.0.1:54822 10.96.0.1:443 users:(("calico-node",pid=14918,fd=7))
ESTAB 0 0 192.168.1.120:44166 192.168.1.104:6443 users:(("kube-proxy",pid=8637,fd=12))
# Configuring HTTPS on Ingress
Steps to configure HTTPS:
1. Prepare the domain certificate files (self-signed with openssl/cfssl, or issued by a trusted CA)
[root@master ~]# tar -zxvf cfssl.tar.gz -C /usr/local/bin/
[root@master ~]# chmod +x certs.sh
[root@master ~]# ./certs.sh
2021/11/04 11:21:17 [INFO] generating a new CA key and certificate from CSR
2021/11/04 11:21:17 [INFO] generate received request
2021/11/04 11:21:17 [INFO] received CSR
2021/11/04 11:21:17 [INFO] generating key: rsa-2048
2021/11/04 11:21:18 [INFO] encoded CSR
2021/11/04 11:21:18 [INFO] signed certificate with serial number 122465585755296665714458411579230792812069987507
2021/11/04 11:21:18 [INFO] generate received request
2021/11/04 11:21:18 [INFO] received CSR
2021/11/04 11:21:18 [INFO] generating key: rsa-2048
2021/11/04 11:21:18 [INFO] encoded CSR
2021/11/04 11:21:18 [INFO] signed certificate with serial number 552651333045426922300417138573432468469848830218
2021/11/04 11:21:18 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
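The contents of certs.sh are not shown here; a minimal cfssl sketch that produces output like the above would be (the CSR/config file names are assumptions):
# Self-sign a CA, then issue a server certificate for web.aliangedu.cn
cfssl gencert -initca ca-csr.json | cfssljson -bare ca
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes web.aliangedu.cn-csr.json | cfssljson -bare web.aliangedu.cn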
2. Save the certificate files into a Secret:
[root@master ~]# kubectl create secret tls web-aliangedu-cn --cert=web.aliangedu.cn.pem --key=web.aliangedu.cn-key.pem
3. Configure TLS in the Ingress rule:
[root@master ~]# cat ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: aliangedu-https
  namespace: web-test-ns
spec:
  tls:                            # TLS configuration
  - hosts:                        # Host list covered by this certificate
    - web.aliangedu.cn
    secretName: web-aliangedu-cn  # Name of the Secret holding the TLS key pair
  rules:
  - host: web.aliangedu.cn
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80
[root@master work]# kubectl apply -f ingress.yaml
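After pointing web.aliangedu.cn at a controller node in the local hosts file, TLS can be verified with curl (-k skips trust verification because the certificate is self-signed):
curl -k https://web.aliangedu.cn/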
Check the Ingress and the controller Pods:
[root@master work]# kubectl get pods -A -o wide
NAMESPACE       NAME                             READY   STATUS    RESTARTS   AGE   IP              NODE    NOMINATED NODE   READINESS GATES
default         web-5b87848979-l6zjz             1/1     Running   0          9h    10.244.104.10   node2   <none>           <none>
ingress-nginx   nginx-ingress-controller-26w7m   1/1     Running   0          17h   192.168.1.121   node2   <none>           <none>
ingress-nginx   nginx-ingress-controller-9bm8d   1/1     Running   0          17h   192.168.1.120   node1   <none>           <none>
···
web-test-ns     nginx-6799fc88d8-6ccsk           1/1     Running   0          24m   10.244.104.11   node2   <none>           <none>
[root@master work]# kubectl get ingress -n web-test-ns
NAME              CLASS    HOSTS              ADDRESS   PORTS     AGE
aliangedu-https   <none>   web.aliangedu.cn             80, 443   24m