Karmada Installation

Installing Karmada

Helm chart installation

Helm chart: https://github.com/karmada-io/karmada/tree/release-1.3/charts

bitnami/kubectl:latest
cfssl/cfssl
k8s.gcr.io/karmada/etcd:3.5.3-0
k8s.gcr.io/karmada/kube-apiserver:v1.24.2
k8s.gcr.io/karmada/kube-controller-manager:v1.24.2
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-aggregated-apiserver:latest
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-controller-manager:latest
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-scheduler:latest
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-webhook:latest
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-search:latest
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-descheduler:latest
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-scheduler-estimator:latest
swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-agent:latest
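If these images need to be mirrored into a private registry, a loop along the following lines can pre-pull and push them (a sketch; registry.example.com is a placeholder, and the list should be extended with the remaining images above):

#!/bin/bash
# Mirror the images listed above into a private registry (registry.example.com is a placeholder).
REGISTRY=registry.example.com
for img in \
  k8s.gcr.io/karmada/etcd:3.5.3-0 \
  k8s.gcr.io/karmada/kube-apiserver:v1.24.2 \
  k8s.gcr.io/karmada/kube-controller-manager:v1.24.2 \
  swr.ap-southeast-1.myhuaweicloud.com/karmada/karmada-controller-manager:latest
do
  docker pull "${img}"
  docker tag "${img}" "${REGISTRY}/karmada/${img##*/}"   # keep only name:tag under the private registry
  docker push "${REGISTRY}/karmada/${img##*/}"
done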

karmada.yaml, with the image repositories overridden:

etcd:
  internal:
    image:
      repository: karmada/etcd
kubeControllerManager:
  image:
    repository: karmada/kube-controller-manager
apiServer:
  image:
    repository: karmada/kube-apiserver


search:
  image:
    repository: karmada/karmada-search
descheduler:
  image:
    repository: karmada/karmada-descheduler
schedulerEstimator:
  image:
    repository: karmada/karmada-scheduler-estimator
agent:
  image:
    repository: karmada/karmada-agent
aggregatedApiServer:
  image:
    repository: karmada/karmada-aggregated-apiserver
controllerManager:
  image:
    repository: karmada/karmada-controller-manager
webhook:
  image:
    repository: karmada/karmada-webhook
scheduler:
  image:
    repository: karmada/karmada-scheduler
    
certs:
  auto:
    hosts: [
      "kubernetes.default.svc",
      "*.etcd..svc.",
      "*..svc.",
      "*..svc",
      "localhost",
      "127.0.0.1",
      "host-server-ip"  # change this to the host cluster IP, otherwise the agent cannot reach the host cluster due to certificate errors
    ]

Note on the certificate issue with the Helm chart install: in pull mode the agent cannot connect to the host cluster if the host cluster IP is missing from the certificate SANs.
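To check which SANs the served certificate actually carries, a quick probe against port 5443 can help (a sketch; host-server-ip is a placeholder for the host cluster IP):

# Inspect the SANs of the certificate served on the karmada-apiserver port.
echo | openssl s_client -connect host-server-ip:5443 2>/dev/null \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"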

Download the dependencies

helm dependency update karmada

This creates a charts/ directory under the Helm chart directory and downloads the dependency .tgz packages into it.

Install

helm package karmada

helm install karmada -n karmada-system --create-namespace karmada-0.0.3.tgz -f karmada.yaml
kubectl get pods -n karmada-system

NAME                                              READY   STATUS    RESTARTS        AGE
etcd-0                                            1/1     Running   0               4m49s
karmada-aggregated-apiserver-79b7b5654c-8rw8j     1/1     Running   2 (4m15s ago)   4m49s
karmada-apiserver-9d86c4949-b488p                 1/1     Running   0               4m49s
karmada-controller-manager-778c7b7b56-6qrn2       1/1     Running   2 (4m45s ago)   4m49s
karmada-kube-controller-manager-6dd5bdc55-7z5h9   1/1     Running   2 (4m14s ago)   4m49s
karmada-scheduler-d6b87bcf9-h74bn                 1/1     Running   0               4m49s
karmada-webhook-68dd9586c9-ntb9r                  1/1     Running   2 (4m44s ago)   4m49s

Uninstall

helm uninstall karmada -n karmada-system

kubectl delete sa/karmada-pre-job -n karmada-system
kubectl delete clusterRole/karmada-pre-job
kubectl delete clusterRoleBinding/karmada-pre-job
kubectl delete ns karmada-system

Installing kubectl-karmada

https://github.com/karmada-io/karmada/releases

tar -zxf kubectl-karmada-linux-amd64.tgz

which kubectl
cp kubectl-karmada /usr/local/bin

Move the kubectl-karmada executable onto the PATH.

kubectl karmada version

kubectl karmada version: version.Info{GitVersion:"v1.2.1", GitCommit:"de4972b74f848f78a58f9a0f4a4e85f243ba48f8", GitTreeState:"clean", BuildDate:"2022-07-14T09:33:32Z", GoVersion:"go1.17.11", Compiler:"gc", Platform:"linux/amd64"}

Install the agent

On the host cluster

# fetch the karmada kubeconfig
kubectl get secret -n karmada-system karmada-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > karmada-apiserver.config

In karmada-apiserver.config, change server to https://host-server-ip:5443.

apiVersion: v1
kind: Config
clusters:
  - cluster:
      certificate-authority-data: xxxx
      insecure-skip-tls-verify: false
      server: https://karmada-apiserver.karmada-system.svc.cluster.local:5443
    name: karmada-apiserver
users:
  - user:
      client-certificate-data: xxxx
      client-key-data: xxxx
    name: karmada-apiserver
contexts:
  - context:
      cluster: karmada-apiserver
      user: karmada-apiserver
    name: karmada-apiserver
current-context: karmada-apiserver
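The change can also be applied with a one-liner (a sketch; host-server-ip is a placeholder for the host cluster IP):

# Point the extracted kubeconfig at the externally reachable karmada-apiserver address.
sed -i'' -e "s#https://karmada-apiserver.karmada-system.svc.cluster.local:5443#https://host-server-ip:5443#g" karmada-apiserver.config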

On the agent (member) cluster

agent.yaml: fill in the certificate material from karmada-apiserver.config (the kubeconfig fields are base64-encoded and need to be decoded), and change clusterName for each worker cluster.

installMode: "agent"
agent:
  image:
    repository: karmada/karmada-agent
  clusterName: "member"
  kubeconfig:
    caCrt: |
      -----BEGIN CERTIFICATE-----
      XXXXXXXXXXXXXXXXXXXXXXXXXXX
      -----END CERTIFICATE-----
    crt: |
      -----BEGIN CERTIFICATE-----
      XXXXXXXXXXXXXXXXXXXXXXXXXXX
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      XXXXXXXXXXXXXXXXXXXXXXXXXXX
      -----END RSA PRIVATE KEY-----
    server: "https://host-server-ip:5443"
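The three PEM blocks can be extracted from karmada-apiserver.config and decoded like this (a sketch; it assumes the kubeconfig has the single cluster/user entries shown above):

# Decode the CA certificate, client certificate and client key so they can be pasted into agent.yaml.
kubectl config view --kubeconfig karmada-apiserver.config --raw \
  -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d > ca.crt
kubectl config view --kubeconfig karmada-apiserver.config --raw \
  -o jsonpath='{.users[0].user.client-certificate-data}' | base64 -d > client.crt
kubectl config view --kubeconfig karmada-apiserver.config --raw \
  -o jsonpath='{.users[0].user.client-key-data}' | base64 -d > client.key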

Install

# install
helm install karmada-agent -n karmada-system --create-namespace  karmada-0.0.3.tgz -f agent.yaml

# uninstall
helm uninstall karmada-agent -n karmada-system

Test

On the worker cluster

kubectl get pods -n karmada-system

NAME                             READY   STATUS    RESTARTS   AGE
karmada-agent-86864d7d8b-kfmk9   1/1     Running   0          20s

On the host cluster

kubectl get cluster --kubeconfig karmada-apiserver.config

NAME      VERSION        MODE   READY   AGE
member1   v1.19.5+k3s2   Pull   True    4m39s

Installing ANP

For ANP installation, see https://karmada.io/docs/userguide/clustermanager/working-with-anp

Without it you will hit errors like the following:

karmadactl get pod --kubeconfig karmada-apiserver.config

Error: [cluster(member1) is inaccessible, please check authorization or network, cluster(member2) is inaccessible, please check authorization or network]

ANP is installed so that member clusters joined in pull mode and the Karmada control plane can reach each other over the network. Only then can karmada-aggregated-apiserver reach the member clusters, which lets users access member clusters through Karmada.

Build the proxy images and certificates

# clone the source
git clone -b v0.0.24/dev https://github.com/mrlihanbo/apiserver-network-proxy.git
cd apiserver-network-proxy/

# build, then push
docker build . --build-arg ARCH=amd64 -f artifacts/images/agent-build.Dockerfile -t karmada/proxy-agent:0.0.24
docker build . --build-arg ARCH=amd64 -f artifacts/images/server-build.Dockerfile -t karmada/proxy-server:0.0.24


# generate certificates in the certs directory, putting the IP into the certificate SANs
make certs PROXY_SERVER_IP=x.x.x.x

Deploy the proxy

server

proxy-server.yaml (place it in the project root)

apiVersion: apps/v1
kind: Deployment
metadata:
  name: proxy-server
  namespace: karmada-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy-server
  template:
    metadata:
      labels:
        app: proxy-server
    spec:
      containers:
      - command:
        - /proxy-server
        args:
          - --health-port=8092
          - --cluster-ca-cert=/var/certs/server/cluster-ca-cert.crt
          - --cluster-cert=/var/certs/server/cluster-cert.crt 
          - --cluster-key=/var/certs/server/cluster-key.key
          - --mode=http-connect 
          - --proxy-strategies=destHost 
          - --server-ca-cert=/var/certs/server/server-ca-cert.crt
          - --server-cert=/var/certs/server/server-cert.crt 
          - --server-key=/var/certs/server/server-key.key
        image: karmada/proxy-server:0.0.24
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 8092
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 60
        name: proxy-server
        volumeMounts:
        - mountPath: /var/certs/server
          name: cert
      restartPolicy: Always
      hostNetwork: true
      volumes:
      - name: cert
        secret:
          secretName: proxy-server-cert
---
apiVersion: v1
kind: Secret
metadata:
  name: proxy-server-cert
  namespace: karmada-system
type: Opaque
data:
  # The {{...}} tokens are placeholders; replace-proxy-server.sh substitutes them with the base64-encoded certificates.
  server-ca-cert.crt: |
    {{SERVER_CA_CERT}}
  server-cert.crt: |
    {{SERVER_CERT}}
  server-key.key: |
    {{SERVER_KEY}}
  cluster-ca-cert.crt: |
    {{CLUSTER_CA_CERT}}
  cluster-cert.crt: |
    {{CLUSTER_CERT}}
  cluster-key.key: |
    {{CLUSTER_KEY}}

replace-proxy-server.sh, the certificate substitution script:

#!/bin/bash

cert_yaml=proxy-server.yaml

SERVER_CA_CERT=$(cat certs/frontend/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{SERVER_CA_CERT}}/${SERVER_CA_CERT}/g" ${cert_yaml}

SERVER_CERT=$(cat certs/frontend/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{SERVER_CERT}}/${SERVER_CERT}/g" ${cert_yaml}

SERVER_KEY=$(cat certs/frontend/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{SERVER_KEY}}/${SERVER_KEY}/g" ${cert_yaml}

CLUSTER_CA_CERT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{CLUSTER_CA_CERT}}/${CLUSTER_CA_CERT}/g" ${cert_yaml}

CLUSTER_CERT=$(cat certs/agent/issued/proxy-frontend.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{CLUSTER_CERT}}/${CLUSTER_CERT}/g" ${cert_yaml}

CLUSTER_KEY=$(cat certs/agent/private/proxy-frontend.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{CLUSTER_KEY}}/${CLUSTER_KEY}/g" ${cert_yaml}
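Run the script from the apiserver-network-proxy project root (where make certs generated the certs/ directory) before applying the manifest:

chmod +x replace-proxy-server.sh
bash replace-proxy-server.sh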

Deploy to the host cluster

kubectl apply -f proxy-server.yaml

agent

proxy-agent.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: proxy-agent
  name: proxy-agent
  namespace: karmada-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: proxy-agent
  template:
    metadata:
      labels:
        app: proxy-agent
    spec:
      containers:
        - command:
            - /proxy-agent
          args:
            - '--ca-cert=/var/certs/agent/ca.crt'
            - '--agent-cert=/var/certs/agent/proxy-agent.crt'
            - '--agent-key=/var/certs/agent/proxy-agent.key'
            - '--proxy-server-host={{karmada_controlplan_addr}}'  # the {{...}} placeholders are filled in by replace-proxy-agent.sh
            - '--proxy-server-port=8091'
            - '--agent-identifiers=host={{member3_cluster_addr}}'
          image: karmada/proxy-agent:0.0.24
          imagePullPolicy: IfNotPresent
          name: proxy-agent
          livenessProbe:
            httpGet:
              scheme: HTTP
              port: 8093
              path: /healthz
            initialDelaySeconds: 15
            timeoutSeconds: 60
          volumeMounts:
            - mountPath: /var/certs/agent
              name: cert
      volumes:
        - name: cert
          secret:
            secretName: proxy-agent-cert
---
apiVersion: v1
kind: Secret
metadata:
  name: proxy-agent-cert
  namespace: karmada-system
type: Opaque
data:
  # The {{...}} tokens are placeholders; replace-proxy-agent.sh substitutes them with the base64-encoded certificates.
  ca.crt: |
    {{PROXY_AGENT_CA_CRT}}
  proxy-agent.crt: |
    {{PROXY_AGENT_CRT}}
  proxy-agent.key: |
    {{PROXY_AGENT_KEY}}

replace-proxy-agent.sh

#!/bin/bash

cert_yaml=proxy-agent.yaml

karmada_controlplan_addr=$1
member3_cluster_addr=$2
sed -i'' -e "s/{{karmada_controlplan_addr}}/${karmada_controlplan_addr}/g" ${cert_yaml}
sed -i'' -e "s/{{member3_cluster_addr}}/${member3_cluster_addr}/g" ${cert_yaml}

PROXY_AGENT_CA_CRT=$(cat certs/agent/issued/ca.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{PROXY_AGENT_CA_CRT}}/${PROXY_AGENT_CA_CRT}/g" ${cert_yaml}

PROXY_AGENT_CRT=$(cat certs/agent/issued/proxy-agent.crt | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{PROXY_AGENT_CRT}}/${PROXY_AGENT_CRT}/g" ${cert_yaml}

PROXY_AGENT_KEY=$(cat certs/agent/private/proxy-agent.key | base64 | tr "\n" " "|sed s/[[:space:]]//g)
sed -i'' -e "s/{{PROXY_AGENT_KEY}}/${PROXY_AGENT_KEY}/g" ${cert_yaml}

Run the script

chmod +x replace-proxy-agent.sh
bash replace-proxy-agent.sh proxy_ip member_cluster_ip

Edit the karmada-agent Deployment

kubectl edit deploy karmada-agent -n karmada-system

Add:

  • --cluster-api-endpoint: the API endpoint of the member Kubernetes cluster; see the cluster's kubeconfig
  • --proxy-server-address: http://<proxy server ip>:8088, the address through which the proxy server is reached
......

      containers:
      - command:
        - /bin/karmada-agent
        - --karmada-kubeconfig=/etc/kubeconfig/kubeconfig
        - --cluster-name=member1
        - --cluster-api-endpoint=https://<member ip>:6443  # add newline
        - --proxy-server-address=http://<proxy server ip>:8088  # add newline
        - --cluster-status-update-frequency=10s
        - --v=4
......

Port 8088 can be changed in the source:

https://github.com/mrlihanbo/apiserver-network-proxy/blob/v0.0.24/dev/cmd/server/app/server.go#L267.

Testing the Karmada deployment

kubectl get cluster --kubeconfig karmada-apiserver.config
NAME      VERSION         MODE   READY   AGE
member1   v1.22.10+k3s1   Pull   True    6h16m
member2   v1.22.10+k3s1   Pull   True    156m

nginx.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
---

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 2

Deploy

kubectl apply -f nginx.yaml  --kubeconfig karmada-apiserver.config

Verify pod distribution

kubectl get pods -A --kubeconfig member1-config
kubectl get pods -A --kubeconfig member2-config

Result (verifying ANP)

karmadactl get pod --kubeconfig karmada-apiserver.config
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-f4mk6   member1   1/1     Running   0          142m
nginx-6799fc88d8-hcbz6   member1   1/1     Running   0          142m
nginx-6799fc88d8-wxghk   member2   1/1     Running   0          142m

Submariner

Karmada uses Submariner to connect the networks of member clusters. Submariner flattens the networks between the connected clusters and enables IP reachability between Pods and Services, independent of the network plugin (CNI).

The Pod CIDR and Service CIDR must be different across member clusters.

Helm installation

See https://submariner.io/operations/deployment/helm/

Installation preparation

Prepare the charts

helm repo add submariner-latest https://submariner-io.github.io/submariner-charts/charts

helm repo update

# list chart versions
helm search repo submariner-latest/submariner-k8s-broker --versions
helm search repo submariner-latest/submariner-operator --versions

# fetch a specific chart version
helm fetch submariner-latest/submariner-k8s-broker --version xxxx
helm fetch submariner-latest/submariner-operator

Prepare the images

There is currently no official way to do an offline installation, so the required images have to be mirrored manually:

#SUBMARINER_VER=$(subctl version | cut -d: -f2 | cut -dv -f2)
SUBMARINER_VER=0.12.2
repo=xxxx

for i in submariner-operator submariner-route-agent submariner-globalnet submariner-gateway submariner-networkplugin-syncer submariner-operator-index lighthouse-coredns lighthouse-agent nettest
do
  docker pull quay.io/submariner/${i}:${SUBMARINER_VER}
  docker tag quay.io/submariner/${i}:${SUBMARINER_VER} ${repo}/submariner/${i}:${SUBMARINER_VER}
  docker push ${repo}/submariner/${i}:${SUBMARINER_VER}
done

Install subctl

Download it from https://github.com/submariner-io/releases/releases

On the host cluster:

cp subctl-vxx.xx-linux-amd64 /usr/local/bin/subctl
chmod +x /usr/local/bin/subctl

Configure the member cluster nodes

Configuring one node per member cluster is enough:

kubectl label nodes xxx submariner.io/gateway=true
kubectl annotate node xxx gateway.submariner.io/public-ip=ipv4:1.2.3.4
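A quick check that the label and annotation actually landed on the node (xxx stands for the node name, as above):

kubectl get nodes -l submariner.io/gateway=true
kubectl describe node xxx | grep gateway.submariner.io/public-ip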

References

  • https://github.com/submariner-io/submariner/issues/1926#issuecomment-1188216877
  • https://submariner.io/operations/nat-traversal/
  • https://github.com/submariner-io/submariner/issues/1649

Errors you may run into

Without the gateway label:

subctl diagnose all

✗ Checking gateway connections
✗ There are no gateways detected
 
 
✗ Checking Submariner support for the kube-proxy mode 
✗ Error spawning the network pod: timed out waiting for the condition

There is no gateway pod.

Without the public-ip annotation, the gateway reports errors:

E0106 06:33:10.090003 1 public_ip.go:80] Error resolving public IP with resolver api:api.ipify.org: retrieving public IP from https://api.ipify.org: Get "https://api.ipify.org": dial tcp 54.91.59.199:443: i/o timeout.
E0106 06:33:40.090557 1 public_ip.go:80] Error resolving public IP with resolver api:api.my-ip.io/ip: retrieving public IP from https://api.my-ip.io/ip: Get "https://api.my-ip.io/ip": dial tcp 161.35.189.70:443: i/o timeout
E0106 06:34:10.090773 1 public_ip.go:80] Error resolving public IP with resolver api:ip4.seeip.org: retrieving public IP from https://ip4.seeip.org: Get "https://ip4.seeip.org": dial tcp 23.128.64.141:443: i/o timeout
F0106 06:34:10.090872 1 main.go:134] Error creating local endpoint object from types.SubmarinerSpecification{ClusterCidr:[]string{"10.244.0.0/16"}, ColorCodes:[]string{"blue"}, GlobalCidr:[]string{}, ServiceCidr:[]string{"10.10.0.0/16"}, Broker:"k8s", CableDriver:"libreswan", ClusterID:"cluster-b", Namespace:"submariner-operator", PublicIP:"", Token:"", Debug:false, NATEnabled:false, HealthCheckEnabled:true, HealthCheckInterval:0x1, HealthCheckMaxPacketLossCount:0x5}: could not determine public IP: Unable to resolve public IP by any of the resolver methods: [api:api.ipify.org api:api.my-ip.io/ip api:ip4.seeip.org]

On the host cluster

# install
helm install submariner-broker submariner-k8s-broker-0.12.2.tgz --create-namespace -n submariner-broker

NAME: submariner-broker
LAST DEPLOYED: Tue Aug 16 10:13:32 2022
NAMESPACE: submariner-broker
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
The Submariner Kubernetes Broker is now setup.

You can retrieve the server URL by running

  $ SUBMARINER_BROKER_URL=$(kubectl -n default get endpoints kubernetes -o jsonpath="{.subsets[0].addresses[0].ip}:{.subsets[0].ports[?(@.name=='https')].port}")

The broker client token and CA can be retrieved by running

  $ SUBMARINER_BROKER_CA=$(kubectl -n submariner-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-broker-submariner-k8s-broker-client')].data['ca\.crt']}")
  $ SUBMARINER_BROKER_TOKEN=$(kubectl -n submariner-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-broker-submariner-k8s-broker-client')].data.token}"|base64 --decode)

Install the operator on the member clusters

SUBMARINER_PSK=$(LC_CTYPE=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 64 | head -n 1)

6jo7djPJww0KoFm0ESuTzqRZSrUmi4oCnNiykmk0p6yDEQA4agEpZTeqfT3wmib2

values.yaml: fill in clusterId, clusterCidr and serviceCidr according to the actual member cluster; on k3s they can be read from /etc/systemd/system/k3s.service.
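On k3s the two CIDRs can be pulled straight out of the unit file (a sketch; it assumes the flags were set explicitly there):

# Read the Pod and Service CIDRs so they can be copied into clusterCidr / serviceCidr below.
grep -E -- '--(cluster|service)-cidr' /etc/systemd/system/k3s.service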

---
submariner:
  clusterId: "member1"  # required
  clusterCidr: "" # required
  serviceCidr: "" # required
  globalCidr: ""
  images:
    repository: harbor.xxx.cn:20000/submariner
    tag: "0.12.2"
broker:
  server: "${SUBMARINER_BROKER_URL}" # ip:6443
  token: "${SUBMARINER_BROKER_TOKEN}"
  namespace: "submariner-broker"
  ca: "${SUBMARINER_BROKER_CA}"
  globalnet: false  # set to true to enable Globalnet
ipsec:
  psk: "${SUBMARINER_PSK}"
operator:
  image:
    repository: harbor.xxx.cn:20000/submariner/submariner-operator
    tag: "0.12.2"
gateway:
  image:
    repository: harbor.xxx.cn:20000/submariner/submariner-gateway
    tag: "0.12.2"


# install
helm install submariner-operator submariner-operator-0.12.2.tgz --create-namespace -n submariner-operator -f values.yaml

# verify
kubectl get pods -n submariner-operator

NAME                                            READY   STATUS    RESTARTS   AGE
submariner-gateway-zvps6                        1/1     Running   0          17h
submariner-lighthouse-agent-6dd4bc9b4b-ft6cq    1/1     Running   0          22h
submariner-lighthouse-coredns-8bf8fccdb-92kpd   1/1     Running   0          22h
submariner-lighthouse-coredns-8bf8fccdb-gxpxs   1/1     Running   0          22h
submariner-operator-d4cd68cb9-4qdv5             1/1     Running   1          22h
submariner-routeagent-8mwrh                     1/1     Running   0          22h


# if Globalnet is enabled
kubectl get pods -n submariner-operator
NAME                                            READY   STATUS              RESTARTS     AGE
submariner-gateway-bg6df                        1/1     Running             0            3s
submariner-globalnet-dpfrz                      1/1     Running             0            2s
submariner-lighthouse-agent-5dd6dc54f-89ddb     0/1     ContainerCreating   0            2s
submariner-lighthouse-coredns-8bf8fccdb-8chpt   0/1     ContainerCreating   0            1s
submariner-lighthouse-coredns-8bf8fccdb-gpq4m   0/1     ContainerCreating   0            1s
submariner-operator-d4cd68cb9-rml8w             1/1     Running             1 (7s ago)   11s
submariner-routeagent-6ztvl                     1/1     Running             0            2s

Verification

The member clusters' kubeconfigs can be gathered on the host cluster; adjust each one as follows:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: xxxx
    server: https://xxx:6443  # change to the member cluster's IP
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: member1  # change to the member name
current-context: member1  # change to the member name
kind: Config
preferences: {}
users:
- name: default
  user:
    client-certificate-data: xxxx
    client-key-data: xxx

Check the installation and configuration to troubleshoot problems

export KUBECONFIG=member1-config  # saves passing --kubeconfig on every command


subctl show all --kubeconfig ${member_KUBE_CONFIG}
subctl diagnose all --kubeconfig ${member_KUBE_CONFIG}

When everything works, Pods and Services in member clusters A and B can reach each other directly.

Pods can be reached directly by IP:

kubectl --kubeconfig member1-config get pod -owide

NAME                     READY   STATUS    RESTARTS      AGE     IP           NODE                                          NOMINATED NODE   READINESS GATES
nginx-6799fc88d8-b7xmw   1/1     Running   1 (45h ago)   7d3h    10.44.0.6    server-605ae265-df2b-4e91-9efa-19069d84f2d0   <none>           <none>
serve-59994d98f6-2lkmp   1/1     Running   0             3h42m   10.44.0.40   server-605ae265-df2b-4e91-9efa-19069d84f2d0   <none>           <none>

kubectl --kubeconfig member2-config run tmp-shell --rm -i --tty --image submariner/nettest:0.12.2 -- /bin/bash
If you don't see a command prompt, try pressing enter.
bash-5.1# curl 10.44.0.6
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;

Services rely on ServiceExport

Deploy ClusterIP Service

kubectl --kubeconfig kubeconfig.cluster-b create deployment nginx --image=nginx
kubectl --kubeconfig kubeconfig.cluster-b expose deployment nginx --port=80
subctl export service --kubeconfig kubeconfig.cluster-b --namespace default nginx

Verify

kubectl --kubeconfig kubeconfig.cluster-a -n default run tmp-shell --rm -i --tty --image quay.io/submariner/nettest:0.12.2 -- /bin/bash
curl nginx.default.svc.clusterset.local

How it works

According to the Submariner documentation, the Lighthouse Agent does this automatically.

Put the above content into mcs.yaml.

kubectl --kubeconfig karmada-apiserver.config apply -f mcs.yaml

Once this is done, a ServiceImport named <service-name>-<service-namespace>-<cluster-id> appears on member2, and member1 can be reached through this service.

kubectl --kubeconfig member2-config get svcim -A
NAMESPACE             NAME                    TYPE           IP                  AGE
submariner-operator   serve-default-member1   ClusterSetIP   ["10.45.84.26"]     13m

Because the Lighthouse DNS server relies on DNS forwarding for the clusterset.local domain, adding a dnsConfig better matches real-world usage.

mcs_test.yaml

apiVersion: v1
kind: Pod
metadata:
  name: dnsutils
spec:
  containers:
  - name: dnsutils
    image: quay.io/submariner/nettest:0.12.2
    imagePullPolicy: IfNotPresent
    command: ["sleep","3600"]
  dnsConfig:
    searches:
      - svc.clusterset.local

Verify

kubectl --kubeconfig member2-config  apply -f mcs_test.yaml

kubectl --kubeconfig member2-config exec -it dnsutils -- /bin/bash
bash-5.1# curl serve.default
'hello from cluster 1 (Node: server-605ae265-df2b-4e91-9efa-19069d84f2d0 Pod: serve-59994d98f6-4f2t8 Address: 10.44.0.29)'

Multi-cluster Service Discovery(MCS)

Deploying the MCS workload

Prerequisite: Submariner is already set up and working.

Install the CRDs

apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceexport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceexports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
---        

apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceimport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceimports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2

Propagate the application to member1


---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serve
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serve
  template:
    metadata:
      labels:
        app: serve
    spec:
      containers:
      - name: serve
        image: jeremyot/serve:0a40de8
        args:
        - "--message='hello from cluster 1 (Node:  Pod:  Address: )'"
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
---      
apiVersion: v1
kind: Service
metadata:
  name: serve
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: mcs-workload
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: serve
    - apiVersion: v1
      kind: Service
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1

Propagate the ServiceExport to member1

---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-export-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceExport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1

Propagate the ServiceImport to member2

apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-import-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceImport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member2

Deploy

kubectl --kubeconfig karmada-apiserver.config apply -f mcs.yaml

On member2 you can now see a Service prefixed with derived- and an EndpointSlice prefixed with imported-:

kubectl --kubeconfig member2-config get svc
NAME            TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
derived-serve   ClusterIP   10.47.99.218   <none>        80/TCP    3m53s


kubectl --kubeconfig member2-config get endpointSlice
NAME                           ADDRESSTYPE   PORTS   ENDPOINTS      AGE
imported-member1-serve-pz5fv   IPv4          8080    10.44.0.27     5s

Test MCS

kubectl --kubeconfig member2-config run tmp-shell --rm -i --tty --image submariner/nettest:0.12.2 -- /bin/bash

curl derived-serve
'hello from cluster 1 (Node: server-605ae265-df2b-4e91-9efa-19069d84f2d0 Pod: serve-59994d98f6-4f2t8 Address: 10.44.0.7)'

Multi-cluster Ingress(MCI)

https://karmada.io/docs/userguide/service/multi-cluster-ingress

Remove Traefik

k3s installs Traefik by default; to remove it:

# stop the k3s service
systemctl stop k3s

Edit the service file (vim /etc/systemd/system/k3s.service) and add this line to ExecStart:

--disable traefik \

Restart the service

systemctl daemon-reload
systemctl start k3s

multi-cluster-ingress

https://karmada.io/docs/userguide/service/multi-cluster-ingress

Build the MCI image

Per the Karmada docs, this is a fork of ingress-nginx controller v1.1.1:

git clone https://github.com/karmada-io/multi-cluster-ingress-nginx.git

From the project root, build ingress-controller/controller:1.0.0-dev (see build/dev-env.sh):

export TAG=1.0.0-dev
export REGISTRY=${REGISTRY:-ingress-controller}

DEV_IMAGE=${REGISTRY}/controller:${TAG}

make build image
docker tag "${REGISTRY}/controller:${TAG}" "${DEV_IMAGE}"

Deploy

ingress_values.yaml, with the controller image replaced:

controller:
  name: controller
  image:
    registry: "harbor.xxx.cn:xxx"
    image: ingress-nginx/multi-cluster-controller  # replaced
    tag: "v1.1.1"
    digest: ""
  admissionWebhooks:
    patch:
      image:
        registry: "harbor.xxx.cn:xxx"
        image: ingress-nginx/kube-webhook-certgen
        tag: v1.1.1
        digest: ""
  config:
    worker-processes: "1"
  podLabels:
    deploy-date: "1662515412"
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  hostPort:
    enabled: true
  terminationGracePeriodSeconds: 0
  service:
    type: NodePort

defaultBackend:
  image:
    registry: "harbor.xxx.cn:xxx"
    image: ingress-nginx/defaultbackend-amd64
    tag: "1.5"

Deploy

In the multi-cluster-ingress-nginx project, package the Helm chart:

cd charts
helm package ingress-nginx

Deploy

helm install ingress-nginx -n ingress-nginx --create-namespace ingress-nginx-4.0.15.tgz -f ingress_values.yaml

To give nginx-ingress-controller permission to watch the relevant resources (multiclusteringress, endpointslices, and services), karmada-apiserver.config has to be provided to nginx-ingress-controller.

karmada-kubeconfig-secret.yaml

# karmada-kubeconfig-secret.yaml
apiVersion: v1
data:
  kubeconfig: {data} # karmada-apiserver.config Base64-encoded
kind: Secret
metadata:
  name: kubeconfig
  namespace: ingress-nginx
type: Opaque
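The same Secret can also be created straight from the file instead of hand-filling the base64 data (a sketch; it assumes karmada-apiserver.config is in the current directory):

kubectl -n ingress-nginx create secret generic kubeconfig \
  --from-file=kubeconfig=karmada-apiserver.config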

Modify the Deployment

kubectl apply -f karmada-kubeconfig-secret.yaml
kubectl -n ingress-nginx edit deployment ingress-nginx-controller

Example

apiVersion: apps/v1
kind: Deployment
metadata:
  ...
spec:
  ...
  template:
    spec:
      containers:
      - args:
        - /nginx-ingress-controller
        - --karmada-kubeconfig=/etc/kubeconfig  # new line
        ...
        volumeMounts:
        ...
        - mountPath: /etc/kubeconfig            # new line
          name: kubeconfig                      # new line
          subPath: kubeconfig                   # new line
      volumes:
      ...
      - name: kubeconfig                        # new line
        secret:                                 # new line
          secretName: kubeconfig                # new line

Check:

kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-5b8b6d64d-b4kqn   1/1     Running   0          16m

Test

ingress_test.yaml

apiVersion: networking.karmada.io/v1alpha1
kind: MultiClusterIngress
metadata:
  name: demo-localhost
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: demo.localdev.me
    http:
      paths:
      - backend:
          service:
            name: serve
            port:
              number: 80
        path: /web
        pathType: Prefix

Deploy the MCS workload first, then apply the MultiClusterIngress.

Access

kubectl apply -f ingress_test.yaml
curl -H 'Host:demo.localdev.me' http://127.0.0.1:80/web
'hello from cluster xxxxx'

Managing member clusters through the proxy API

https://karmada.io/docs/userguide/globalview/aggregated-api-endpoint

cluster-proxy-rbac.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: cluster-proxy-clusterrole
rules:
- apiGroups:
  - 'cluster.karmada.io'
  resources:
  - clusters/proxy
  resourceNames:
  - member1
  - member2
  - member3
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: cluster-proxy-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-proxy-clusterrole
subjects:
  - kind: User
    name: "system:admin"
kubectl --kubeconfig karmada-apiserver.config apply -f cluster-proxy-rbac.yaml

Verify

kubectl --kubeconfig karmada-apiserver.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/{clustername}/proxy/api/v1/nodes
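Any Kubernetes API path can be appended after /proxy in the same way; for example, listing the pods of a namespace on member1:

kubectl --kubeconfig karmada-apiserver.config get --raw /apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/namespaces/default/pods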

Accessing the proxy API with a bearer token

member-proxy-rbac.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-admin
  namespace: karmada-custom-user

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  name: custom-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: Group
  name: system:serviceaccounts:karmada-custom-user

Grant the ServiceAccount admin rights on the member cluster:

kubectl --kubeconfig=member1-config apply -f member-proxy-rbac.yaml

cluster-proxy-rbac.yaml, using the same ServiceAccount:

apiVersion: v1
kind: ServiceAccount
metadata:
  name: custom-admin
  namespace: karmada-custom-user

---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: custom-cluster-proxy-clusterrole
rules:
- apiGroups:
  - 'cluster.karmada.io'
  resources:
  - clusters/proxy
  resourceNames:
  - member1
  - member2
  verbs:
  - '*'
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: custom-cluster-proxy-clusterrolebinding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: custom-cluster-proxy-clusterrole
subjects:
- kind: ServiceAccount
  name: custom-admin
  namespace: karmada-custom-user
- kind: Group
  name: "system:serviceaccounts"
- kind: Group
  name: "system:serviceaccounts:karmada-custom-user"

Apply on the host cluster:

kubectl --kubeconfig karmada-apiserver.config apply -f cluster-proxy-rbac.yaml

Get the ServiceAccount token

# adjust the namespace and secret name to your environment
kubectl --kubeconfig karmada-apiserver.config -n karmada-custom-user describe secret $(kubectl -n karmada-custom-user get secret | grep admin | awk '{print $1}')
curl -k -X GET 'https://xxxx:5443/apis/cluster.karmada.io/v1alpha1/clusters/member1/proxy/api/v1/nodes' \
-H 'Authorization: Bearer {token}'
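On Kubernetes 1.24 and later, ServiceAccount token Secrets are no longer created automatically, so the describe command above may come back empty; in that case a token can be requested directly (a sketch):

kubectl --kubeconfig karmada-apiserver.config -n karmada-custom-user create token custom-admin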

Simplified installation

Prerequisites

  • The Pod CIDR and Service CIDR must be different across the clusters in the cluster set
  • Cluster version 1.21 or above

karmada

Based on the Karmada 1.3 Helm chart, modified to add the ANP-related deployments.

Image preparation

Certificate preparation

openssl_host.conf

[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no

[req_distinguished_name]
CN = system:admin
O = system:masters

[req_ext]
subjectAltName = @alt_names

[alt_names]
IP.1 = 127.0.0.1
DNS.1 = kubernetes.default.svc
DNS.2 = *.etcd.{namespace}.svc.cluster.local # replace {namespace} with the deployment namespace
DNS.3 = *.{namespace}.svc.cluster.local
DNS.4 = *.{namespace}.svc
DNS.5 = localhost
# the entries above are required; below, add your own IPs or DNS names (DNS is recommended for wider applicability)
IP.2 = xxxxx # host cluster IP

openssl_agent.conf

[req]
distinguished_name = req_distinguished_name
req_extensions = req_ext
prompt = no

[req_distinguished_name]
CN = proxy
O = system:nodes

[req_ext]
subjectAltName = @alt_names

[alt_names]
IP.1 = 127.0.0.1
DNS.1 = kubernetes
DNS.2 = localhost
# the entries above are required; below, add your own IPs or DNS names (DNS is recommended for wider applicability)
IP.2 = xxxxx # proxy server IP

Generate the certificates

#!/bin/sh
set -e

SSL_DIR=ssl
valid_day=36500
rm -rf $SSL_DIR
mkdir -p $SSL_DIR

ANP_SSL_DIR=anp
rm -rf $ANP_SSL_DIR
mkdir -p $ANP_SSL_DIR

SSL_CONF=openssl_host.conf
AGENT_SSL_CONF=openssl_agent.conf

# generate the karmada certificates
openssl req -newkey rsa:2048 -nodes -keyout $SSL_DIR/ca.key -x509 -days $valid_day -out $SSL_DIR/ca.crt -subj "/C=xx/ST=x/L=x/O=x/OU=x/CN=ca/emailAddress=x/"
openssl genrsa -out $SSL_DIR/karmada.key 2048
openssl req -new -key $SSL_DIR/karmada.key -out $SSL_DIR/karmada.csr -config $SSL_CONF
openssl x509 -req -days $valid_day -in $SSL_DIR/karmada.csr -CA $SSL_DIR/ca.crt -CAkey $SSL_DIR/ca.key -CAcreateserial -out $SSL_DIR/karmada.crt -extensions req_ext -extfile $SSL_CONF

openssl genrsa -out $SSL_DIR/frontProxyKey.key 2048
openssl req -new -key $SSL_DIR/frontProxyKey.key -out $SSL_DIR/frontProxyKey.csr -config $SSL_CONF
openssl x509 -req -days $valid_day -in $SSL_DIR/frontProxyKey.csr -CA $SSL_DIR/ca.crt -CAkey $SSL_DIR/ca.key -CAcreateserial -out $SSL_DIR/frontProxyKey.crt -extensions req_ext -extfile $SSL_CONF

# generate the proxy certificates
openssl req -newkey rsa:2048 -nodes -keyout ${ANP_SSL_DIR}/ca.key -x509 -days $valid_day -out ${ANP_SSL_DIR}/ca.crt -subj "/C=xx/ST=x/L=x/O=x/OU=x/CN=ca/emailAddress=x/"
openssl genrsa -out ${ANP_SSL_DIR}/proxy-frontend.key 2048
openssl req -new -key ${ANP_SSL_DIR}/proxy-frontend.key -out ${ANP_SSL_DIR}/proxy-frontend.csr -config $AGENT_SSL_CONF
openssl x509 -req -days $valid_day -in ${ANP_SSL_DIR}/proxy-frontend.csr -CA ${ANP_SSL_DIR}/ca.crt -CAkey ${ANP_SSL_DIR}/ca.key -CAcreateserial -out ${ANP_SSL_DIR}/proxy-frontend.crt -extensions req_ext -extfile $AGENT_SSL_CONF

openssl genrsa -out ${ANP_SSL_DIR}/agent.key 2048
openssl req -new -key ${ANP_SSL_DIR}/agent.key -out ${ANP_SSL_DIR}/agent.csr -config $AGENT_SSL_CONF
openssl x509 -req -days $valid_day -in ${ANP_SSL_DIR}/agent.csr -CA ${ANP_SSL_DIR}/ca.crt -CAkey ${ANP_SSL_DIR}/ca.key -CAcreateserial -out ${ANP_SSL_DIR}/agent.crt -extensions req_ext -extfile $AGENT_SSL_CONF
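It is worth confirming that the generated certificates carry the expected SANs before deploying (a sketch; the file names follow the script above):

openssl x509 -in ssl/karmada.crt -noout -text | grep -A1 "Subject Alternative Name"
openssl x509 -in anp/proxy-frontend.crt -noout -text | grep -A1 "Subject Alternative Name"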

Filling in the values files

host.yaml

global:
  imageRegistry: "harbor.xxx.cn:20000" # registry to use

# image name overrides: see the Helm installation section above
#apiServer:
#  image:
#    repository: karmada/kube-apiserver


certs:
  custom:
    mode: custom  # custom certificate mode
    caCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    caKey: |
      -----BEGIN PRIVATE KEY-----
      -----END PRIVATE KEY-----
    crt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----
    frontProxyCaCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    frontProxyCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    frontProxyKey: |
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----

proxyServer:
  install: true
  cert:
    caCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    crt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----
    clusterCaCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----      
    clusterCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    clusterKey: |
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----

agent.yaml

global:
  imageRegistry: "harbor.xxx.cn:20000"

installMode: agent  # karmada agent mode
agent:
  clusterName: member1
  clusterEndpoint: "https://xxxx:6443" # member cluster api-server endpoint
  kubeconfig:
    server: "https://xxxx:5443" # karmada host cluster endpoint; if the certificate is wrong the agent cannot connect
    caCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    crt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    key: |
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----
proxyAgent:
  install: true
  proxyServerIP: xxxx # proxy IP, used to reach the proxy server
  clusterIP: xxxx    # member IP, used to connect to the agent
  cert:
    clusterCaCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    clusterCrt: |
      -----BEGIN CERTIFICATE-----
      -----END CERTIFICATE-----
    clusterKey: |
      -----BEGIN RSA PRIVATE KEY-----
      -----END RSA PRIVATE KEY-----

Host deployment

  • Set the release name for the host to karmada and the namespace to karmada-system (it must match the certificates above), edit the YAML and fill in the corresponding values, then start it.

    kubectl get pods -n karmada-system
    NAME                                               READY   STATUS    RESTARTS   AGE
    etcd-0                                             1/1     Running   0          4h9m
    karmada-aggregated-apiserver-f67d5c5b7-9bqqn       1/1     Running   2          4h9m
    karmada-apiserver-5cb8b8f46c-8nzcn                 1/1     Running   0          4h9m
    karmada-controller-manager-848998c768-zlmkv        1/1     Running   4          4h9m
    karmada-kube-controller-manager-658bb465ff-dmqgg   1/1     Running   0          10s
    karmada-proxy-server-6f78f4b97-nnp9b               1/1     Running   0          4h9m
    karmada-scheduler-5f54777cc-2s4bl                  1/1     Running   0          4h9m
    karmada-webhook-85b4db8877-q7xc5                   1/1     Running   2          4h9m
    

Agent deployment

On the member cluster, deploy with the release name karmada-agent, edit the YAML and fill in the corresponding values, then start it.

kubectl get pods -n karmada-system
NAME                                        READY   STATUS    RESTARTS   AGE
karmada-agent-7db99c489c-lrdqr              1/1     Running   0          4h7m
karmada-agent-proxy-agent-cbc7d5988-bq9mg   1/1     Running   0          4h7m

Verify the Karmada deployment by distributing an application

Run on the host:

# fetch karmada-apiserver.config
HOST_IP=x.x.x.x  # host cluster IP
kubectl get secret -n karmada-system karmada-kubeconfig -o jsonpath={.data.kubeconfig} | base64 -d > karmada-apiserver.config
sed -i'' -e "s/karmada-apiserver.karmada-system.svc.cluster.local/${HOST_IP}/g" karmada-apiserver.config

# list the member clusters
kubectl get cluster --kubeconfig karmada-apiserver.config
NAME      VERSION         MODE   READY   AGE
member1   v1.22.10+k3s1   Pull   True    6h16m
member2   v1.22.10+k3s1   Pull   True    156m

nginx.yaml, to simulate distributing a workload:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx
---

apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 2

Deploy

kubectl apply -f nginx.yaml  --kubeconfig karmada-apiserver.config

Verify pod distribution

kubectl get pods -A --kubeconfig member1-config
kubectl get pods -A --kubeconfig member2-config

Result (verifying ANP); karmadactl can be obtained from the Karmada releases page.

karmadactl get pod --kubeconfig karmada-apiserver.config
NAME                     CLUSTER   READY   STATUS    RESTARTS   AGE
nginx-6799fc88d8-f4mk6   member1   1/1     Running   0          142m
nginx-6799fc88d8-hcbz6   member1   1/1     Running   0          142m
nginx-6799fc88d8-wxghk   member2   1/1     Running   0          142m

MCS/MCI features

submariner-broker

Install it on the host cluster; this installs the CRDs required by Submariner.

Once the values are filled in it can be deployed directly.

Prepare the parameters used by the operator installation; keep all of them consistent across clusters, and adjust the retrieval commands to your environment.

SUBMARINER_BROKER_CA=$(kubectl -n submariner-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-broker-submariner-k8s-broker-client')].data['ca\.crt']}")


SUBMARINER_BROKER_TOKEN=$(kubectl -n submariner-broker get secrets -o jsonpath="{.items[?(@.metadata.annotations['kubernetes\.io/service-account\.name']=='submariner-broker-submariner-k8s-broker-client')].data.token}"|base64 --decode)

# shared by the submariner-operators for encrypted communication; must be identical on every cluster
SUBMARINER_PSK=$(LC_CTYPE=C tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 64 | head -n 1)

Verify

kubectl get crd  |grep submariner.io
brokers.submariner.io                   2022-10-21T02:38:47Z
clusterglobalegressips.submariner.io    2022-10-21T02:38:57Z
clusters.submariner.io                  2022-10-21T02:16:07Z
endpoints.submariner.io                 2022-10-21T02:16:07Z
gateways.submariner.io                  2022-10-21T02:16:07Z
globalegressips.submariner.io           2022-10-21T02:38:57Z
globalingressips.submariner.io          2022-10-21T02:38:57Z
servicediscoveries.submariner.io        2022-10-21T02:38:47Z
submariners.submariner.io               2022-10-21T02:38:48Z

submariner-operator

Install it on every cluster whose network needs to be connected.

# configuring one node per cluster is enough; submariner-gateway needs it to run
kubectl label nodes xxx submariner.io/gateway=true
kubectl annotate node xxx gateway.submariner.io/public-ip=ipv4:<node-ip>

values.yaml

---
submariner:
  clusterId: "member1" # cluster identifier
  clusterCidr: "10.44.0.0/16" # required
  serviceCidr: "10.45.0.0/16" # required
  #globalCidr: "242.0.1.0/24"
  images:
    repository: harbor.xxx.cn:20000/submariner
    tag: "0.12.2"
broker: # fill in the broker information
  #globalnet: true
  server: "xx.xx.xx.57:6443"  # apiserver address of the broker Kubernetes cluster
  token:  # fill in from the parameters prepared above
  namespace: "submariner-broker"  # required
  ca:  # fill in from the parameters prepared above
ipsec:
  psk: "" # fill in from the parameters prepared above

Deploy once the values are filled in.

Check the Submariner installation

kubectl get pods -n submariner-operator
NAME                                             READY   STATUS    RESTARTS   AGE
submariner-gateway-mbhsd                         1/1     Running   0          4h33m
submariner-lighthouse-agent-b57968d49-jv7zc      1/1     Running   0          4h33m
submariner-lighthouse-coredns-67fd57c5b7-2584w   1/1     Running   0          4h33m
submariner-lighthouse-coredns-67fd57c5b7-bdqgh   1/1     Running   0          4h33m
submariner-operator-5d549c4df9-7htkr             1/1     Running   1          4h34m
submariner-routeagent-knf7n                      1/1     Running   0          4h33m

Get subctl

# after the operator is installed on two or more clusters, check whether each cluster was set up successfully
subctl diagnose all --kubeconfig member2-config

Successful checks show a ✓; failing checks show an ✗ and need to be investigated.

 ✓ Checking Submariner support for the Kubernetes version
 ✓ Kubernetes version "v1.21.14+k3s1" is supported

 ⚠ Checking Submariner support for the CNI network plugin
 ⚠ Submariner could not detect the CNI network plugin and is using ("generic") plugin. It may or may not work.
 ✓ Trying to detect the Calico ConfigMap

 ✓ Checking gateway connections
 ✓ All connections are established

 ✓ Non-Globalnet deployment detected - checking if cluster CIDRs overlap
 ✓ Clusters do not have overlapping CIDRs
 ✓ Checking Submariner pods
 ✓ All Submariner pods are up and running

 ⚠ Starting with Kubernetes 1.23, the Pod Security admission controller expects namespaces to have security labels. Without these, you will see warnings in subctl's output. subctl should work fine, but you can avoid the warnings and ensure correct behavior by adding these labels to the namespace submariner-operator:
 ⚠ pod-security.kubernetes.io/enforce privileged
 ⚠ pod-security.kubernetes.io/audit privileged
 ⚠ pod-security.kubernetes.io/warn privileged
 ✓ Checking Submariner support for the kube-proxy mode 
 ✓ The kube-proxy mode is supported

 ✓ Checking the firewall configuration to determine if the metrics port (8080) is allowed
 ✓ Skipping this check as it's a single node cluster

 ✓ Checking the firewall configuration to determine if intra-cluster VXLAN traffic is allowed
 ✓ Skipping this check as it's a single node cluster

 ✓ Globalnet is not installed - skipping

Skipping inter-cluster firewall check as it requires two kubeconfigs. Please run "subctl diagnose firewall inter-cluster" command manually.

Verify the MCS feature

mcs_crd.yaml, to distribute the ServiceImport and ServiceExport CRDs:

apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceexport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceexports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
---        

apiVersion: policy.karmada.io/v1alpha1
kind: ClusterPropagationPolicy
metadata:
  name: serviceimport-policy
spec:
  resourceSelectors:
    - apiVersion: apiextensions.k8s.io/v1
      kind: CustomResourceDefinition
      name: serviceimports.multicluster.x-k8s.io
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2

Deploy

kubectl apply -f mcs_crd.yaml --kubeconfig karmada-apiserver.config

The application is distributed to member1 and accessed through the Service on member2.

mcs.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: serve
spec:
  replicas: 1
  selector:
    matchLabels:
      app: serve
  template:
    metadata:
      labels:
        app: serve
    spec:
      containers:
      - name: serve
        image: harbor.xxx.cn:20000/library/serve:0a40de8
        args:
        - "--message='hello from cluster 1 (Node:  Pod:  Address: )'"
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
---      
apiVersion: v1
kind: Service
metadata:
  name: serve
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: mcs-workload
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: serve
    - apiVersion: v1
      kind: Service
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1

---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: serve
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-export-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceExport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member1


---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: serve
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: serve-import-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceImport
      name: serve
  placement:
    clusterAffinity:
      clusterNames:
        - member2

Deploy

kubectl apply -f mcs.yaml --kubeconfig karmada-apiserver.config 

Verify

kubectl --kubeconfig member2-config run tmp-shell --rm -i --tty --image submariner/nettest:0.12.2 -- /bin/bash

curl derived-serve
'hello from cluster 1 (Node: server-605ae265-df2b-4e91-9efa-19069d84f2d0 Pod: serve-59994d98f6-4f2t8 Address: 10.44.0.7)'

ingress-nginx

ingress-nginx values

Obtain the karmadaKubeconfig parameter: the base64-encoded contents of karmada-apiserver.config.
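A single-line encoding can be produced like this (a sketch; -w0 disables line wrapping in GNU base64):

base64 -w0 karmada-apiserver.config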

values.yaml

controller:
  name: controller
  image:
    registry: "harbor.xxx.cn:20000"
    image: ingress-nginx/multi-cluster-controller
    tag: "v1.1.1"
    digest: ""
  admissionWebhooks:
    patch:
      image:
        registry: "harbor.xxx.cn:20000"
        image: ingress-nginx/kube-webhook-certgen
        tag: v1.1.1
        digest: ""

defaultBackend:
  image:
    registry: "harbor.xxx.cn:20000"
    image: ingress-nginx/defaultbackend-amd64
    tag: "1.5"
# karmada-apiserver.config Base64-encoded
karmadaKubeconfig: ""

Verify ingress-nginx

kubectl get pods -n ingress-nginx
NAME                                       READY   STATUS    RESTARTS   AGE
ingress-nginx-controller-9cf6c9f5c-4zrmd   1/1     Running   0          5h17m
svclb-ingress-nginx-controller-kbkrb       2/2     Running   0          5h17m

mci.yaml

---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mci
spec:
  replicas: 2
  selector:
    matchLabels:
      app: mci
  template:
    metadata:
      labels:
        app: mci
    spec:
      containers:
      - name: serve
        image: harbor.xxx.cn:20000/library/serve:0a40de8
        args:
        - "--message='hello from cluster 1 (Node:  Pod:  Address: )'"
        env:
          - name: NODE_NAME
            valueFrom:
              fieldRef:
                fieldPath: spec.nodeName
          - name: POD_NAME
            valueFrom:
              fieldRef:
                fieldPath: metadata.name
---      
apiVersion: v1
kind: Service
metadata:
  name: mci
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: mci
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: mci-workload
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: mci
    - apiVersion: v1
      kind: Service
      name: mci
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
    replicaScheduling:
      replicaDivisionPreference: Weighted
      replicaSchedulingType: Divided
      weightPreference:
        staticWeightList:
          - targetCluster:
              clusterNames:
                - member1
            weight: 1
          - targetCluster:
              clusterNames:
                - member2
            weight: 1
---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceExport
metadata:
  name: mci

---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: mci-export-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceExport
      name: mci
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
        
---
apiVersion: multicluster.x-k8s.io/v1alpha1
kind: ServiceImport
metadata:
  name: mci
spec:
  type: ClusterSetIP
  ports:
  - port: 80
    protocol: TCP

---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: mci-import-policy
spec:
  resourceSelectors:
    - apiVersion: multicluster.x-k8s.io/v1alpha1
      kind: ServiceImport
      name: mci
  placement:
    clusterAffinity:
      clusterNames:
        - member1
        - member2
---
apiVersion: networking.karmada.io/v1alpha1
kind: MultiClusterIngress
metadata:
  name: demo-localhost
  namespace: default
spec:
  ingressClassName: nginx
  rules:
  - host: demo.localdev.me
    http:
      paths:
      - backend:
          service:
            name: mci
            port:
              number: 80
        path: /web
        pathType: Prefix

kubectl apply -f mci.yaml --kubeconfig karmada-apiserver.config

# repeated requests return different results, showing that load balancing works
curl -H 'Host:demo.localdev.me' http://127.0.0.1:80/web
'hello from cluster 1 (Node: server-605ae265-df2b-4e91-9efa-19069d84f2d0 Pod: mci-bf5dc988d-zfrp4 Address: 10.44.0.23)'

'hello from cluster 1 (Node: server-cdf0266c-393f-471c-b29b-51004e99c426 Pod: mci-bf5dc988d-d5wgv Address: 10.46.0.27)'