Step-by-step: deploying SafeLine in a cloud-native environment #289

jangrui started this conversation in Show and tell

Step-by-step: deploying SafeLine in a cloud-native environment

At a glance

Build a Kubernetes environment with a single Sealos command

Environment

IP              Hostname  OS          Role             Data disk
192.168.20.253  /         /           lb
192.168.20.101  silkdo-1  Anolis 8.8  master+longhorn  /dev/vdc
192.168.20.102  silkdo-2  Anolis 8.8  master+longhorn  /dev/vdc
192.168.20.103  silkdo-3  Anolis 8.8  master+longhorn  /dev/vdc
192.168.20.104  silkdo-4  Anolis 8.8  worker
192.168.20.105  silkdo-5  Anolis 8.8  worker
192.168.20.106  silkdo-6  Anolis 8.8  worker
192.168.20.107  silkdo-7  Anolis 8.8  worker
192.168.20.108  silkdo-8  Anolis 8.8  worker
192.168.20.109  silkdo-9  Anolis 8.8  worker

Install Sealos

cat << EOF | sudo tee /etc/yum.repos.d/labring.repo
[fury]
name=labring Yum Repo
baseurl=https://yum.fury.io/labring/
enabled=1
gpgcheck=0
EOF
sudo yum clean all
sudo yum install -y sealos
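
A quick sanity check that the package installed and the binary is on the PATH:

sealos version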

Sealos command reference

Generate the clusterfile

sealos gen \
    --masters 192.168.20.101,192.168.20.102,192.168.20.103 \
    --nodes 192.168.20.104,192.168.20.105,192.168.20.106,192.168.20.107,192.168.20.108,192.168.20.109 \
    --pk ~/.ssh/id_rsa \
    labring/kubernetes:v1.25.11 \
    labring/nerdctl:v1.2.1 \
    labring/helm:v3.12.0 \
    labring/cilium:v1.13.0 \
    > clusterfile
# Switch to a China-mainland registry mirror
sed -i '/^ImageRepository/ s|""|"registry.aliyuncs.com/google_containers"|' clusterfile

Configure load balancing

My cloud platform provides an LB service, but I don't want Services of type LoadBalancer, so I only configured three ports on the LB: 80, 443, and 6443. Ports 80 and 443 are for Ingress Nginx, and 6443 load-balances the APIServer.

Also, if you need to access the APIServer from outside the cluster, add the LB IP to the CertSANs list as well.

# Add the lb/vip to CertSANs
sed -i '/^ CertSANs/a\ - 192.168.20.253' clusterfile

Customize the CIDR

As mentioned above, I plan to use Cilium as the CNI plugin. From the clusterfile you can see that a k8s cluster deployed by Sealos uses 100.64.0.0/10 as the default PodSubnet, while Cilium's clusterPoolIPv4PodCIDR defaults to 10.0.0.0/8, so we need to set the Cilium Operator's clusterPoolIPv4PodCIDR to 100.64.0.0/10 as well.

cat > Kubefile <<EOF
FROM labring/cilium:v1.13.0

CMD ["cp opt/cilium /usr/bin/","cp opt/hubble /usr/bin/","cilium install --chart-directory charts/cilium --helm-set kubeProxyReplacement=strict,k8sServiceHost=apiserver.cluster.local,k8sServicePort=6443,ipam.operator.clusterPoolIPv4PodCIDR=100.64.0.0/10"]
EOF
sealos build -t labring/cilium:v1.13.0-amd64 --platform linux/amd64 -f Kubefile .
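
The clusterfile generated earlier still references labring/cilium:v1.13.0. If you want the cluster to use the rebuilt image instead, one option is to swap the tag before applying the clusterfile — a sketch that assumes the original image name appears verbatim in the file:

sed -i 's|labring/cilium:v1.13.0|labring/cilium:v1.13.0-amd64|g' clusterfile
grep cilium clusterfile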

Install the Kubernetes cluster

# Deploy Kubernetes
sealos apply -f clusterfile
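
Once sealos finishes, standard kubectl and cilium CLI checks (not part of the original steps) can confirm that every node joined and the CNI is healthy:

kubectl get nodes -o wide
kubectl -n kube-system get pods
cilium status --wait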

At this point, we'll assume you have completed the k8s cluster installation by following the steps above.

Deploy Longhorn cloud-native distributed storage

Mount the disks

As planned above, the three master nodes also act as storage nodes, and the data disk on each of them is /dev/vdc.

DISK=vdc
for i in `seq 1 3`;do cat <<-EOF | ssh 192.168.20.10$i;done
 echo "=========="
 hostname
 parted -s /dev/${DISK} mklabel gpt
 parted -s /dev/${DISK} mkpart p ext4 0% 100%
 mkfs.ext4 -F /dev/${DISK}1
 sed -i '/longhorn/d' /etc/fstab
 echo "/dev/${DISK}1 /var/lib/longhorn ext4 defaults 0 0" >> /etc/fstab
 mkdir -p /var/lib/longhorn
 mount -a
 df -h /var/lib/longhorn
EOF

Install dependencies

Install iSCSI

cat <<-'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-iscsi-installation
  # namespace: longhorn-system
  labels:
    app: longhorn-iscsi-installation
  annotations:
    command: &cmd OS=$(grep "ID_LIKE" /etc/os-release | cut -d '=' -f 2); if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y iscsi-initiator-utils && echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid; fi && if [ $? -eq 0 ]; then echo "iscsi install successfully"; else echo "iscsi install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-iscsi-installation
  template:
    metadata:
      labels:
        app: longhorn-iscsi-installation
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      hostNetwork: true
      hostPID: true
      initContainers:
        - name: iscsi-installation
          command:
            - nsenter
            - --mount=/proc/1/ns/mnt
            - --
            - bash
            - -c
            - *cmd
          image: hub.silkdo.com/library/alpine:3.12
          securityContext:
            privileged: true
      containers:
        - name: sleep
          image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  updateStrategy:
    type: RollingUpdate
EOF
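
To confirm the installer actually ran on every node, check that the DaemonSet pods are up and that the init containers logged the success message from the script above (the same kind of check applies to the NFS installer below):

kubectl get ds longhorn-iscsi-installation
kubectl get pods -l app=longhorn-iscsi-installation -o wide
kubectl logs -l app=longhorn-iscsi-installation -c iscsi-installation | grep "install successfully"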

Install the NFSv4 client

cat <<-'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-nfs-installation
  # namespace: longhorn-system
  labels:
    app: longhorn-nfs-installation
  annotations:
    command: &cmd OS=$(grep "ID_LIKE" /etc/os-release | cut -d '=' -f 2); if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nfs-common; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nfs-client; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nfs-utils; fi && if [ $? -eq 0 ]; then echo "nfs install successfully"; else echo "nfs install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-nfs-installation
  template:
    metadata:
      labels:
        app: longhorn-nfs-installation
    spec:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      hostNetwork: true
      hostPID: true
      initContainers:
        - name: nfs-installation
          command:
            - nsenter
            - --mount=/proc/1/ns/mnt
            - --
            - bash
            - -c
            - *cmd
          image: hub.silkdo.com/library/alpine:3.12
          securityContext:
            privileged: true
      containers:
        - name: sleep
          image: registry.cn-beijing.aliyuncs.com/kubesphereio/pause:3.5
  updateStrategy:
    type: RollingUpdate
EOF

Add labels

kubectl label nodes silkdo-1 node.longhorn.io/create-default-disk=true
kubectl label nodes silkdo-2 node.longhorn.io/create-default-disk=true
kubectl label nodes silkdo-3 node.longhorn.io/create-default-disk=true

Default disks are created only on the labeled nodes (this matches the createDefaultDiskLabeledNodes setting below).
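
You can confirm that exactly the three storage nodes carry the label:

kubectl get nodes -l node.longhorn.io/create-default-disk=true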

Deploy Longhorn

helm repo add longhorn https://charts.longhorn.io && helm repo update longhorn
cat <<-'EOF' | helm -n longhorn-system upgrade -i longhorn longhorn/longhorn --version v1.4.3 --create-namespace -f -
image:
  longhorn:
    engine:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/longhorn-engine
    manager:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/longhorn-manager
    ui:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/longhorn-ui
    instanceManager:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/longhorn-instance-manager
    shareManager:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/longhorn-share-manager
    backingImageManager:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/backing-image-manager
    supportBundleKit:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/support-bundle-kit
  csi:
    attacher:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/csi-attacher
    provisioner:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/csi-provisioner
    nodeDriverRegistrar:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/csi-node-driver-registrar
    resizer:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/csi-resizer
    snapshotter:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/csi-snapshotter
    livenessProbe:
      repository: uhub.service.ucloud.cn/silkdo/longhornio/livenessprobe

service:
  ui:
    type: NodePort
    nodePort: 30890

defaultSettings:
  allowRecurringJobWhileVolumeDetached: true
  createDefaultDiskLabeledNodes: true
  replicaAutoBalance: "best-effort"
  taintToleration: "node-role.kubernetes.io/control-plane:NoSchedule;node-role.kubernetes.io/master:NoSchedule"
  priorityClass: "high-priority"
  nodeDownPodDeletionPolicy: "delete-both-statefulset-and-deployment-pod"
  concurrentAutomaticEngineUpgradePerNodeLimit: "5"

longhornManager:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule

longhornDriver:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule

longhornUI:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  nodeSelector:
    node.longhorn.io/create-default-disk: "true"

longhornConversionWebhook:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  nodeSelector:
    node.longhorn.io/create-default-disk: "true"

longhornAdmissionWebhook:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  nodeSelector:
    node.longhorn.io/create-default-disk: "true"

longhornRecoveryBackend:
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  nodeSelector:
    node.longhorn.io/create-default-disk: "true"

enablePSP: false
EOF
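
Before moving on, check that all Longhorn components come up and that the chart registered the longhorn StorageClass used in the validation step below:

kubectl -n longhorn-system get pods
kubectl get storageclass longhorn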

Verify

cat <<-EOF | kubectl apply -f -
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwo
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

---
kind: Pod
apiVersion: v1
metadata:
  name: rwo
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: rwo
          mountPath: "/pv-data"
          readOnly: false
  volumes:
    - name: rwo
      persistentVolumeClaim:
        claimName: rwo

---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: rwx
spec:
  storageClassName: longhorn
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

---
kind: Pod
apiVersion: v1
metadata:
  name: rwx
spec:
  containers:
    - name: busybox
      image: busybox
      command:
        - sleep
        - "3600"
      volumeMounts:
        - name: rwx
          mountPath: "/pv-data"
          readOnly: false
  volumes:
    - name: rwx
      persistentVolumeClaim:
        claimName: rwx
EOF
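
If provisioning works, both PVCs should reach Bound and the pods should be able to write to the mounted path (pod names and mount path come from the manifests above):

kubectl get pvc rwo rwx
kubectl get pod rwo rwx
kubectl exec rwo -- sh -c 'echo longhorn-ok > /pv-data/test && cat /pv-data/test'
kubectl exec rwx -- sh -c 'echo longhorn-ok > /pv-data/test && cat /pv-data/test'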

At this point, we'll assume you have completed the Longhorn installation by following the steps above.

Deploy the Ingress Nginx controller

As mentioned earlier, I don't plan to use LoadBalancer because the cost is simply too high, but I also don't want users to have to append a port when visiting. So I expose ports 80 and 443 on the hosts via hostNetwork, use pod affinity and anti-affinity to pin the Ingress Nginx pods to the three master nodes, and then use the load balancer above to expose Ingress to the internet.

VERSION=4.7.1
curl -L https://ghproxy.com/https://github.com/kubernetes/ingress-nginx/releases/download/helm-chart-${VERSION}/ingress-nginx-${VERSION}.tgz -o ~/.cache/helm/repository/ingress-nginx-${VERSION}.tgz
cat << EOF | helm -n ingress-nginx upgrade -i ingress-nginx ~/.cache/helm/repository/ingress-nginx-${VERSION}.tgz --create-namespace -f -
controller:
  image:
    registry: uhub.service.ucloud.cn/silkdo/registry.k8s.io
    digest:
    digestChroot:
  config:
  dnsPolicy: ClusterFirstWithHostNet
  reportNodeInternalIp: true
  watchIngressWithoutClass: true
  hostNetwork: true
  hostPort:
    enabled: false
    ports:
      http: 80
      https: 443
  ingressClassResource:
    default: true
  publishService:
    enabled: false
  kind: Deployment
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
  replicaCount: 3
  service:
    enabled: true
  opentelemetry:
    enabled: true
    image: uhub.service.ucloud.cn/silkdo/registry.k8s.io/ingress-nginx/opentelemetry:v20230527
    containerSecurityContext:
      allowPrivilegeEscalation: false
  admissionWebhooks:
    enabled: true
    patch:
      enabled: true
      image:
        registry: uhub.service.ucloud.cn/silkdo/registry.k8s.io
        digest:
      tolerations:
        - key: node-role.kubernetes.io/control-plane
          operator: Exists
          effect: NoSchedule
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/control-plane
                    operator: Exists
defaultBackend:
  enabled: true
  name: defaultbackend
  image:
    registry: uhub.service.ucloud.cn/silkdo/registry.k8s.io
    digest:
  updateStrategy:
    rollingUpdate:
      maxSurge: 0
      maxUnavailable: 1
    type: RollingUpdate
  tolerations:
    - key: node-role.kubernetes.io/control-plane
      operator: Exists
      effect: NoSchedule
    - key: node-role.kubernetes.io/master
      operator: Exists
      effect: NoSchedule
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: node-role.kubernetes.io/control-plane
                operator: Exists
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: In
                values:
                  - default-backend
          topologyKey: kubernetes.io/hostname
  replicaCount: 3
EOF
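
To confirm the controller pods are pinned to the masters and reachable through the load balancer, list them and hit the LB address; with only the default backend installed, a 404 response means traffic flows end to end (the IP is the LB from the environment table, so adjust for your setup):

kubectl -n ingress-nginx get pods -o wide
curl -I http://192.168.20.253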

Deploy the cert-manager controller

This is purely to obtain free SSL certificates.

VERSION=1.12.2
curl -L https://charts.jetstack.io/charts/cert-manager-v${VERSION}.tgz -o ~/.cache/helm/repository/cert-manager-v${VERSION}.tgz
cat <<-'EOF' | helm -n cert-manager upgrade -i cert-manager ~/.cache/helm/repository/cert-manager-v${VERSION}.tgz --create-namespace -f -
installCRDs: true

strategy:
  type: RollingUpdate
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 1

image:
  repository: uhub.service.ucloud.cn/silkdo/quay.io/jetstack/cert-manager-controller

extraEnv:
  - name: TZ
    value: Asia/Shanghai

tolerations:
  - key: node-role.kubernetes.io/control-plane
    operator: Exists
    effect: NoSchedule
  - key: node-role.kubernetes.io/master
    operator: Exists
    effect: NoSchedule

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/control-plane
              operator: Exists

prometheus:
  enabled: true
  servicemonitor:
    enabled: true
    endpointAdditionalProperties:
      relabelings:
        - replacement: base
          targetLabel: group

webhook:
  image:
    repository: uhub.service.ucloud.cn/silkdo/quay.io/jetstack/cert-manager-webhook

cainjector:
  image:
    repository: uhub.service.ucloud.cn/silkdo/quay.io/jetstack/cert-manager-cainjector

acmesolver:
  image:
    repository: uhub.service.ucloud.cn/silkdo/quay.io/jetstack/cert-manager-acmesolver

startupapicheck:
  image:
    repository: uhub.service.ucloud.cn/silkdo/quay.io/jetstack/cert-manager-ctl
EOF
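
Make sure the controller, webhook and cainjector are all running before creating Issuers:

kubectl -n cert-manager get pods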

Deploy SafeLine

Install SafeLine with Helm

helm repo add jangrui https://github.com/jangrui/SafeLine --force-update
helm -n safeline upgrade -i safeline jangrui/safeline --create-namespace
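
Once the release is up, list the pods and services in the safeline namespace; the safeline-mgt-api and safeline-tengine services referenced in the Ingress manifests below should appear here:

kubectl -n safeline get pods
kubectl -n safeline get svc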

Create an HTTP01 ACME Issuer

cat <<-'EOF' | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: waf.silkdo.com
  namespace: safeline
spec:
  acme:
    email: admin@jangrui.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: waf.silkdo.com.tls
    solvers:
      - http01:
          ingress:
            class: nginx
EOF
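
cert-manager should report the Issuer as Ready once it has registered the ACME account:

kubectl -n safeline get issuer waf.silkdo.com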

Create the Ingress

cat <<-'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: waf.silkdo.com
  namespace: safeline
  annotations:
    cert-manager.io/issuer: "waf.silkdo.com"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: waf.silkdo.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: safeline-mgt-api
                port:
                  number: 1443
  tls:
    - hosts:
        - waf.silkdo.com
      secretName: waf.silkdo.com.tls
EOF
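
With the cert-manager.io/issuer annotation above, cert-manager creates a Certificate and stores it in the waf.silkdo.com.tls Secret; you can watch the issuance and then probe the endpoint (-k covers the window before the real certificate is in place):

kubectl -n safeline get certificate
kubectl -n safeline get secret waf.silkdo.com.tls
curl -kI https://waf.silkdo.com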

DNS setup is omitted here; we assume your domain already resolves correctly.

At this point you can reach the SafeLine management console directly via the domain name; alternatively, you can access it through a NodePort, as shown below.
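
For the NodePort route, look up the port exposed on the management service (assuming the chart exposes it as a NodePort; the service name is the one used in the Ingress above) and open https://<node-ip>:<nodeport>:

kubectl -n safeline get svc safeline-mgt-api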

Configure the WAF

Now that SafeLine is installed, putting our domain behind the WAF for protection is the obvious next step.

Enable human verification

(screenshot: human verification)

This is purely so you can tell at a glance that the site configured below is protected by the WAF.

Configure a protected site

Here we simply use the waf.silkdo.com domain from above as the site to protect.

(screenshot: WAF website configuration)

We already created an Ingress for SafeLine, but its backend SVC is safeline-mgt-api. Now we want the Ingress traffic to go through the WAF as well, which only takes a small change: point the Ingress backend SVC at safeline-tengine instead.

cat <<-'EOF' | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: waf.silkdo.com
  namespace: safeline
  annotations:
    cert-manager.io/issuer: "waf.silkdo.com"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  ingressClassName: nginx
  rules:
    - host: waf.silkdo.com
      http:
        paths:
          - path: /
            pathType: ImplementationSpecific
            backend:
              service:
                name: safeline-tengine
                port:
                  number: 80
  tls:
    - hosts:
        - waf.silkdo.com
      secretName: waf.silkdo.com.tls
EOF

Don't forget that the SVC port needs to change accordingly.

Verify

Now open an incognito window and visit your domain. If you see the screen above, the WAF is working; this completes landing SafeLine in a cloud-native environment.
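
As an extra smoke test, you can send an obviously malicious request from the command line; assuming SafeLine's default protection rules are enabled for the site, a crude SQL-injection probe like this should come back blocked rather than returning the normal page:

curl -k 'https://waf.silkdo.com/?id=1%20and%201=1'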


Replies: 8 comments 10 replies


Good content gets buried in issues, so we enabled Discussions on the repo and moved it over here for you.

0 replies

I followed this article and deployed SafeLine successfully, but after configuring a protected site, requests return 502. Is there any configuration the tutorial leaves out?

1 reply

The latest chart version is 3.16.1, and it works fine in my tests.

Could you share some logs so I can take a look?


With this configuration, proxying the WAF itself works fine, but web services in other namespaces can't be reached through it.
However, after exposing safeline-tengine as a LoadBalancer Service and proxying through that, it works (my ingress controller is set up behind a LoadBalancer).

Basic flow:
loadbalancer ---> safeline-tengine ---> backend service

But configured this way it effectively no longer works together with Ingress. Is your setup also like this?

1 reply

  • The Ingresses in other namespaces are no longer used; they all have to be moved into the safeline namespace.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 4096m
    nginx.ingress.kubernetes.io/service-upstream: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
  name: xxx.xxx.xxx
  namespace: safeline
spec:
  ingressClassName: nginx
  rules:
    - host: xxx.xxx.xxx
      http:
        paths:
          - backend:
              service:
                name: safeline-tengine # web services in every namespace can be routed through the tengine service in the safeline namespace
                port:
                  number: 80
            path: /
            pathType: Prefix
  tls:
    - hosts:
        - xxx.xxx.xxx
      secretName: sixxx.xxx.tls

I looked at the official docs: by installing the SafeLine plugin into the ingress controller you can proxy domains in other namespaces, but with that approach some features are limited and unavailable, such as human verification and authentication.

Basic flow:
client -> lb -> ingress controller (with the safeline plugin installed) -> safeline-detector -> backend service

Notes:

# safeline.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: safeline
  namespace: ingress-nginx
data:
  host: "detector_host" # address of the SafeLine detection engine; must be a fully qualified name such as safeline-detector.safeline.svc.cluster.local, otherwise it errors
  port: "8000"
1 reply

Wow, I hadn't even noticed that an ingress-nginx plugin had been released until you mentioned it.
You could try a fresh install using the integrated approach.


I've tested it; it can proxy Ingresses in other namespaces.

0 replies

After deploying per the tutorial, the reverse proxy from the domain to safeline-tengine works, but human verification and blocking have no effect; only the reverse proxy part works.

3 replies

It works fine on my side.
Could you provide some details so we can look into it?


Are the images in your registry different from the ones on docker.io?
Earlier I couldn't pull from the registry in your values from Malaysia, so I switched to docker.io.
Today I tried pulling from the registry in values.yml instead, and it runs fine.


Yes. The tengine and detector services use customized images — following the official docs, I switched them from socket mode to HTTP — and they are stored in my personal registry. The other images default to SafeLine's official Huawei Cloud registry, swr.cn-east-3.myhuaweicloud.com/chaitin-safeline.


Hmm, newbie passing by: can I treat this tutorial as the process of building the SafeLine images myself? (I'm trying to build the images on my own.)

1 reply

No.


Hi, on version 6.9.1 I can't upload certificates. Is that also because the tengine and detector images weren't modified to HTTP? I'm using the official images.
(screenshot)

3 replies

It's probably not that; after switching to your customized images the problem is still there.


Uninstall, keep the database PVC, delete the remaining PVCs, and redeploy.

helm -n safeline un safeline
kubectl -n safeline get pvc -o custom-columns=NAME:.metadata.name | grep ^safeline | xargs -I {} kubectl -n safeline delete pvc {}
helm repo add jangrui https://github.com/jangrui/SafeLine --force-update
helm -n safeline upgrade -i safeline jangrui/safeline --create-namespace

If safeline-tengine fails to start because error.log is missing, you need to create it manually. For example:

mkdir safeline-safeline-logs-pvc-2ca226e1-0d23-4fb5-aa35-d4cde11b8001/nginx
kubectl -n safeline delete po -l component=tengine

Fixed in 6.10.2+.

This discussion was converted from issue #284 on September 06, 2023 07:24.
