Chemmy's Blog

chengming0916@outlook.com

Server

# Install the NFS packages (the old portmap service is provided by rpcbind on modern Ubuntu)
sudo apt install nfs-common nfs-kernel-server rpcbind -y

# Create the shared directory
sudo mkdir -p /mnt/share/
sudo chmod 777 /mnt/share

# Edit the exports file
sudo vim /etc/exports

# Share the directory
/mnt/share *(rw,sync)

# Set an ACL granting the nfsnobody user read/write access
sudo setfacl -m u:nfsnobody:rw /mnt/share

# Start the NFS services (on systemd hosts: sudo systemctl start nfs-kernel-server)
sudo /etc/init.d/nfs-kernel-server start
sudo /etc/init.d/nfs-common start

# Check that the shares are exported
sudo showmount -e
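The exports line above grants every host read/write access. Real entries usually restrict the client list and set squash and subtree options explicitly; a hedged sketch of /etc/exports variants (the addresses are placeholders):

```
# Allow a whole subnet read/write, skipping the subtree check
/mnt/share 192.168.1.0/24(rw,sync,no_subtree_check)
# Allow one host read-only, squashing root to the anonymous user
/mnt/share 192.168.1.50(ro,sync,root_squash)
```

After editing /etc/exports, `sudo exportfs -ra` re-exports the entries without restarting the service.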

Client

# Install the NFS client
sudo apt install nfs-common

# Create a local mount point
sudo mkdir /mnt/nfs

# sudo mount [nfs_server]:[server_dir] [local_mount_point]
# [nfs_server]        the NFS server's IP
# [server_dir]        the shared path on the server
# [local_mount_point] the local mount path
sudo mount [nfs_server]:[server_dir] [local_mount_point]
# Example
sudo mount 192.168.1.100:/mnt/share /mnt/nfs

# Check whether the mount succeeded
df -Th

Edit fstab to configure automatic mounting

sudo vim /etc/fstab
# Append at the end of the file
[nfs_server]:/mnt/share /mnt/nfs nfs defaults 0 0
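A plain `defaults` entry can hang the boot if the NFS server is unreachable, so NFS fstab lines often add options that defer mounting until the share is first accessed. A hedged sketch (the server address is a placeholder):

```
# _netdev waits for the network; noauto + x-systemd.automount mounts the share on first access
192.168.1.100:/mnt/share /mnt/nfs nfs defaults,_netdev,noauto,x-systemd.automount 0 0
```

After editing, `sudo systemctl daemon-reload` makes systemd pick up the new automount unit.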

Unmount

sudo umount [local_mount_point]
# Example
sudo umount /mnt/nfs

For Kerberos-based authentication, see "Set up an NFS server with Kerberos-based authentication for Linux clients" (linux-console.net).

For security hardening, see "How to secure the NFS service" (Tencent Cloud developer community, tencent.com).

For user identity mapping, see "User identity mapping in the NFS service" by wangmo (cnblogs.com).

Local path mapping (HostPath)

HostPath volumes carry many security risks; best practice is to avoid them whenever possible. When a HostPath volume must be used, scope it to only the required file or directory and mount it read-only.

apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # location of the directory on the host
      path: /data
      # this field is optional
      type: Directory
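In line with the security advice above, the same mount can be made read-only by setting readOnly on the volumeMount; a minimal sketch of the changed fragment:

```yaml
volumeMounts:
- mountPath: /test-pd
  name: test-volume
  readOnly: true  # the container can read but not modify the host path
```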

The supported type values are:

Value              Behavior
"" (empty string)  The default, kept for backward compatibility; no checks are performed before mounting the hostPath volume.
DirectoryOrCreate  If nothing exists at the given path, an empty directory is created there as needed, with permissions 0755 and the same group and ownership as the kubelet.
Directory          A directory must exist at the given path.
FileOrCreate       If nothing exists at the given path, an empty file is created there as needed, with permissions 0644 and the same group and ownership as the kubelet.
File               A file must exist at the given path.
Socket             A UNIX socket must exist at the given path.
CharDevice         A character device must exist at the given path.
BlockDevice        A block device must exist at the given path.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: example-vol-default
provisioner: vendor-name.example

local

A local volume can only be used as a statically created PersistentVolume; dynamic provisioning is not supported.

Compared with hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes: the system learns the volume's node constraints from the PersistentVolume's node affinity configuration.

When using local volumes, you must set the PersistentVolume's nodeAffinity field. The Kubernetes scheduler uses this information to place Pods that use the local volume onto the correct node.

The PersistentVolume's volumeMode field can be set to "Block" (instead of the default "Filesystem") to expose the local volume as a raw block device.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
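Since no provisioner can create local volumes dynamically, the local-storage class referenced above is typically defined with no-provisioner and delayed binding, so volume binding waits until a consuming Pod is scheduled and node affinity can be honored:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
```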

NFS mapping


Minio


Ceph


Prepare the environment

Certificates

[[../杂项/OpenSSL生成自签名证书|OpenSSL生成自签名证书]]

[[K3s证书管理|K3s证书管理]]

Default configuration file

helm show values harbor/harbor > harbor-values.yaml

Installation

Configuration manifests

harbor-values.yaml

expose:
  type: ingress
  tls:
    enabled: true
    certSource: secret
    secret:
      secretName: "example.io"
      notarySecretName: "example.io"
  ingress:
    hosts:
      core: harbor.example.io
      notary: notary.example.io
    controller: default
    annotations:
      ingress.kubernetes.io/ssl-redirect: "true"
      ingress.kubernetes.io/proxy-body-size: "0"
      kubernetes.io/ingress.class: "traefik"
      traefik.ingress.kubernetes.io/router.tls: "true"
      traefik.ingress.kubernetes.io/router.entrypoints: websecure

externalURL: https://harbor.example.io

harborAdminPassword: "Harbor123456"

logLevel: info

chartmuseum:
  enabled: true

database:
  type: external
  external:
    host: "postgres.devops.svc.cluster.local"
    port: "5432"
    username: "harbor"
    password: "harbor"
redis:
  type: external
  external:
    addr: "redis.devops.svc.cluster.local:6379"
    password: "passwd"

harbor-ingress.yaml

apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: kube-ops
  name: harbor-http
spec:
  entryPoints:
    - websecure
  tls:
    secretName: all-xxxx-com
  routes:
    - match: Host(`harbor.example.com`) && PathPrefix(`/`)
      kind: Rule
      services:
        - name: harbor-portal
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: kube-ops
  name: harbor-api
spec:
  entryPoints:
    - websecure
  tls:
    secretName: all-xxxx-com
  routes:
    - match: Host(`harbor.example.com`) && PathPrefix(`/api/`)
      kind: Rule
      services:
        - name: harbor-core
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: kube-ops
  name: harbor-service
spec:
  entryPoints:
    - websecure
  tls:
    secretName: all-xxxx-com
  routes:
    - match: Host(`harbor.example.com`) && PathPrefix(`/service/`)
      kind: Rule
      services:
        - name: harbor-core
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: kube-ops
  name: harbor-v2
spec:
  entryPoints:
    - websecure
  tls:
    secretName: all-xxxx-com
  routes:
    - match: Host(`harbor.example.com`) && PathPrefix(`/v2`)
      kind: Rule
      services:
        - name: harbor-core
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: kube-ops
  name: harbor-chartrepo
spec:
  entryPoints:
    - websecure
  tls:
    secretName: all-xxxx-com
  routes:
    - match: Host(`harbor.example.com`) && PathPrefix(`/chartrepo/`)
      kind: Rule
      services:
        - name: harbor-core
          port: 80
---
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  namespace: kube-ops
  name: harbor-c
spec:
  entryPoints:
    - websecure
  tls:
    secretName: all-xxxx-com
  routes:
    - match: Host(`harbor.example.com`) && PathPrefix(`/c/`)
      kind: Rule
      services:
        - name: harbor-core
          port: 80

Install Harbor

# Add the Harbor chart repository
helm repo add harbor https://helm.goharbor.io

# Deploy or upgrade Harbor
helm upgrade harbor harbor/harbor --namespace harbor \
  --install --create-namespace \
  -f harbor-values.yaml

Configuration

Configure the upstream source for the library registry

kubectl edit configmap harbor-registry -n harbor

# Add a new section after auth:
proxy:
  remoteurl: "https://registry-1.docker.io"


Using Harbor

Configure image caching

References

Building an image proxy with Harbor | Northes

Configuring a Harbor private registry for containerd on Kubernetes ≥ 1.25 (CSDN blog)

Automatic HTTPS certificate issuance for Harbor with Cert-Manager (lusyoe.github.io)

Containerd container image management (Tencent Cloud developer community, tencent.com)

Deploying Harbor on k8s with helm (jianshu.com)

Deploying the harbor registry on a Kubernetes cluster with Helm3 (Tencent Cloud developer community, tencent.com)

Basic containerd commands, by 杨梅冲 (cnblogs.com)

Setting up harbor on Kubernetes 1.21 (Tencent Cloud developer community, tencent.com)

Helm deployment

helm repo add kubernetes-dashboard https://kubernetes.github.io/dashboard/

# helm uses kubectl's default KUBECONFIG; point KUBECONFIG at the k3s config, otherwise the connection fails.
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml

# Note: specify the port and related options for easier access
# service.type=NodePort    the default ClusterIP is only reachable from inside the cluster
# service.nodePort=30080   the access port
# replicaCount=2           run 2 replicas

# helm v2
helm upgrade --install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --create-namespace \
  --namespace kubernetes-dashboard
  # --set service.type=NodePort \
  # --set service.nodePort=30080 \
  # --set replicaCount=2

# helm v3+
helm install kubernetes-dashboard kubernetes-dashboard/kubernetes-dashboard \
  --create-namespace \
  --namespace kubernetes-dashboard
  # --set service.type=NodePort \
  # --set service.nodePort=30080 \
  # --set replicaCount=2

Configure remote access

Expose the port via NodePort

kubectl -n kubernetes-dashboard edit service kubernetes-dashboard-web
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
...
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
  resourceVersion: "343478"
  selfLink: /api/v1/namespaces/kubernetes-dashboard/services/kubernetes-dashboard
  uid: 8e48f478-993d-11e7-87e0-901b0e532516
spec:
  clusterIP: 10.100.124.90
  externalTrafficPolicy: Cluster
  ports:
  - port: 443
    protocol: TCP
    targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
  sessionAffinity: None
  # change the service type to expose the port
  #type: ClusterIP
  type: NodePort
status:
  loadBalancer: {}

Traefik Ingress reverse proxy

Create the certificate request file dashboard-cert-manager.yaml

apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: k3s-chemmy-io
  namespace: default
spec:
  secretName: k3s-chemmy-io-tls
  issuerRef:
    name: letsencrypt-staging
    kind: ClusterIssuer
  commonName: k3s.chemmy.io
  dnsNames:
    - k3s.sample.net

This configuration uses the staging issuer for testing; for the production version, see "Deploying cert-manager on K3s".

kubectl apply -f dashboard-cert-manager.yaml

# Check
kubectl get certificates

# If the status is not READY, inspect with
kubectl describe certificates k3s-chemmy-io

# Clean up the certificate
kubectl delete certificates k3s-chemmy-io
kubectl delete secrets k3s-chemmy-io-tls

Configure an account

Create dashboard-admin.yaml

# Create the ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kubernetes-dashboard
---
# Create the ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kubernetes-dashboard

Create the user

# Apply the user resources
kubectl apply -f dashboard-admin.yaml

Get a token

# v1.24+
sudo k3s kubectl -n kubernetes-dashboard create token admin-user

# v1.23 and earlier
sudo k3s kubectl -n kubernetes-dashboard describe secret admin-user-token | grep '^token'
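On v1.24+ the token from `create token` expires after a while, because service account token Secrets are no longer auto-created. If a long-lived token is needed, a token Secret can be bound to the ServiceAccount explicitly; a sketch:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: admin-user-token
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/service-account.name: admin-user
type: kubernetes.io/service-account-token
```

The token then appears in the Secret and can be read with `kubectl -n kubernetes-dashboard describe secret admin-user-token`.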

Kubectl deployment

The latest versions support Helm only; the older v2.7 release can still be deployed with kubectl.

Download recommended.yaml

kubectl apply -f recommended.yaml

Modify recommended.yaml

...
---
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  # Use NodePort to expose the port directly; use ClusterIP behind Traefik
  #type: ClusterIP
  type: NodePort
  ports:
    - port: 443
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard
---
...

The Dashboard's default web UI certificate is auto-generated; because of problems with its validity period and name, Chrome and IE cannot open the login page (in testing, Firefox opens it normally). For a fix, see "Installing Kubernetes Dashboard and its pitfalls" (jianshu.com).

---

# Comment out the built-in auto-generated certificate Secret
#apiVersion: v1
#kind: Secret
#metadata:
#  labels:
#    k8s-app: kubernetes-dashboard
#  name: kubernetes-dashboard-certs
#  namespace: kubernetes-dashboard
#type: Opaque

---
...
---

kind: Deployment
apiVersion: apps/v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
        - name: kubernetes-dashboard
          image: kubernetesui/dashboard:v2.1.0
          imagePullPolicy: Always
          ports:
            - containerPort: 8443
              protocol: TCP
          args:
            # Comment out automatic certificate generation
            # - --auto-generate-certificates
            - --namespace=kubernetes-dashboard
            # Add the certificate settings and certificate file mapping
            - --token-ttl=3600
            - --bind-address=0.0.0.0
            - --tls-cert-file=tls.crt
            - --tls-key-file=tls.key
            # Uncomment the following line to manually specify Kubernetes API server Host
            # If not specified, Dashboard will attempt to auto discover the API server and connect
            # to it. Uncomment only if the default does not work.
            # - --apiserver-host=http://my-address:port
          volumeMounts:
            - name: kubernetes-dashboard-certs
              mountPath: /certs
            # Create on-disk volume to store exec logs
            - mountPath: /tmp
              name: tmp-volume
          livenessProbe:
            httpGet:
              scheme: HTTPS
              path: /
              port: 8443
            initialDelaySeconds: 30
            timeoutSeconds: 30
          securityContext:
            allowPrivilegeEscalation: false
            readOnlyRootFilesystem: true
            runAsUser: 1001
            runAsGroup: 2001
      volumes:
        - name: kubernetes-dashboard-certs
          secret:
            secretName: kubernetes-dashboard-certs
        - name: tmp-volume
          emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      nodeSelector:
        "kubernetes.io/os": linux
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule

---
...

Generate the certificate files tls.crt, tls.csr, and tls.key

# Generate the key
openssl genrsa -out tls.key 2048
# Generate the CSR
openssl req -new -out tls.csr -key tls.key -subj '/CN=0.0.0.0'
# Generate the certificate
openssl x509 -req -in tls.csr -signkey tls.key -out tls.crt

# Create the secret
kubectl create secret generic kubernetes-dashboard-certs --from-file=tls.crt --from-file=tls.key -n kubernetes-dashboard

Explanation of the -subj sub-parameters

Abbrev        Meaning                                   Full prompt
C             Country code                              Country Name (2 letter code)
ST            State or province                         State or Province Name (full name)
L             City or locality                          Locality Name (eg, city)
O             Organization (company) name               Organization Name (eg, company)
OU            Organizational unit (department) name     Organizational Unit Name (eg, section)
CN            Server domain / certificate owner name    Common Name (e.g. server FQDN or YOUR name)
emailAddress  Email address                             Email
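Combining several of the fields above into one -subj string (all values below are placeholders), and then checking that they ended up in the certificate:

```shell
# Generate a key pair and a self-signed certificate with a multi-field subject
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout example.key -out example.crt -days 365 \
  -subj '/C=CN/ST=Beijing/L=Beijing/O=Example Org/OU=Dev/CN=example.local'

# Print the subject embedded in the certificate
openssl x509 -noout -subject -in example.crt
```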

References

Single-node k3s cluster deployment and in-cluster Dashboard deployment (zhihu.com)

Building a k3s cluster and installing kubernetes-dashboard (cnblogs.com)

[K8S quick start (19): installing Kubernetes Dashboard via Helm (CSDN blog)](https://blog.csdn.net/weixin_41947378/article/details/111661539)

Exposing kubernetes-dashboard via traefik ingress, HTTPS version (CSDN blog)

Kubernetes dashboard v2.7.0 installation guide: building the UI from scratch (zhihu.com)

Common Chinese PyPI mirrors

https://pypi.tuna.tsinghua.edu.cn/simple

https://mirrors.ustc.edu.cn/pypi/web/simple # temporarily removed; redirects to the BFSU PyPI mirror

https://mirrors.aliyun.com/pypi/simple/

http://mirrors.cloud.tencent.com/pypi/simple

View the current mirror URL

pip config list

# Look for the corresponding entries in the output
global.index-url = 'https://pypi.tuna.tsinghua.edu.cn/simple'
install.trusted-host = 'https://pypi.tuna.tsinghua.edu.cn'

Temporary use
pip install numpy -i https://pypi.tuna.tsinghua.edu.cn/simple

Global change
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple

Revert to the default
pip config unset global.index-url
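`pip config set` writes these keys into a pip.conf file that can also be edited by hand; a sketch of the resulting file (its location varies, e.g. ~/.config/pip/pip.conf on Linux, %APPDATA%\pip\pip.ini on Windows):

```ini
[global]
index-url = https://pypi.tuna.tsinghua.edu.cn/simple

[install]
trusted-host = pypi.tuna.tsinghua.edu.cn
```

Note that trusted-host takes a bare hostname, without the https:// scheme.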

Deploying Redis in a K3s cluster is done entirely through Kubernetes resource manifests: component configuration, storage mounting, and network access. This article walks through the deployment step by step with reusable YAML files and explains the key configuration points.

I. Environment

This deployment runs Redis 6.0.8 on a K3s cluster, with all resources in the redis namespace (create it first with kubectl create namespace redis). A StatefulSet keeps the Redis instance stable, a PersistentVolumeClaim (PVC) persists the data, an IngressRouteTCP exposes external access, and a ConfigMap manages the Redis configuration file.

II. Core configuration files and deployment steps

1. Redis configuration file (ConfigMap)

Create redis-config.yaml and store the core Redis parameters in a ConfigMap (network binding, persistence strategy, password authentication, and so on), so that runtime parameters can be adjusted and managed in one place:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-config
  namespace: redis
  labels:
    app: redis
data:
  redis.conf: |
    daemonize no
    bind 0.0.0.0
    port 6379
    tcp-backlog 511
    timeout 0
    tcp-keepalive 300
    pidfile /data/redis-server.pid
    logfile /data/redis.log
    loglevel notice
    databases 16
    always-show-logo yes
    save 900 1
    save 300 10
    save 60 10000
    stop-writes-on-bgsave-error yes
    rdbcompression yes
    rdbchecksum yes
    dbfilename dump.rdb
    dir /data
    slave-serve-stale-data yes
    slave-read-only yes
    repl-diskless-sync no
    repl-diskless-sync-delay 5
    repl-disable-tcp-nodelay no
    slave-priority 100
    appendonly yes
    appendfilename "appendonly.aof"
    appendfsync everysec
    no-appendfsync-on-rewrite no
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    aof-load-truncated yes
    lua-time-limit 5000
    slowlog-log-slower-than 10000
    slowlog-max-len 128
    latency-monitor-threshold 0
    notify-keyspace-events ""
    hash-max-ziplist-entries 512
    hash-max-ziplist-value 64
    list-max-ziplist-size -2
    list-compress-depth 0
    set-max-intset-entries 512
    zset-max-ziplist-entries 128
    zset-max-ziplist-value 64
    hll-sparse-max-bytes 3000
    activerehashing yes
    client-output-buffer-limit normal 0 0 0
    client-output-buffer-limit slave 256mb 64mb 60
    client-output-buffer-limit pubsub 32mb 8mb 60
    hz 10
    aof-rewrite-incremental-fsync yes
    requirepass passwd

After the file is created, deploy the ConfigMap:
kubectl apply -f redis-config.yaml

2. Persistent storage (PV/PVC)

K3s ships with the local-path storage class, which covers everyday Redis persistence needs. Two PVC configurations are given below; pick whichever fits your storage scenario.

Option 1: PVC only (using the local-path storage class)

Create redis-pvc-local-path.yaml, requesting 2Gi of storage with access mode ReadWriteOnce (read/write from a single node), suitable for a single-instance Redis deployment:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: redis
  labels:
    app: redis
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi

Deploy it:
kubectl apply -f redis-pvc-local-path.yaml

Option 2: manually created PV + PVC (hostPath mode)

To use a custom storage path, create redis-pvc-host-path.yaml and define the PersistentVolume (PV) and PVC manually. The PV uses the host path /mnt/redis/data with 5Gi capacity; the PVC requests 2Gi and must reference the same storage class as the PV so that the claim binds to it:

apiVersion: v1
kind: PersistentVolume
metadata:
  # PVs are cluster-scoped, so no namespace is set here
  name: redis-pv
  labels:
    app: redis
spec:
  storageClassName: redis-persistent-storage
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/redis/data

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-pvc
  namespace: redis
  labels:
    app: redis
spec:
  accessModes:
    - ReadWriteOnce
  # must match the PV's storageClassName so the claim binds to redis-pv
  storageClassName: redis-persistent-storage
  resources:
    requests:
      storage: 2Gi

Deploy it:
kubectl apply -f redis-pvc-host-path.yaml

3. Deploy the Redis StatefulSet

Create redis-deployment.yaml. A StatefulSet manages the Redis Pod; it includes an init container for kernel parameter tuning, the data and configuration volume mounts, and the startup command:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
  namespace: redis
  labels:
    app: redis
spec:
  replicas: 1
  # serviceName is required for a StatefulSet; it refers to the redis Service
  serviceName: redis
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      initContainers:
      - name: init-0
        image: busybox
        imagePullPolicy: IfNotPresent
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        command: [ "sysctl", "-w", "net.core.somaxconn=511" ]
        securityContext:
          privileged: true
      # - name: init-1
      #   image: busybox
      #   imagePullPolicy: IfNotPresent
      #   terminationMessagePath: /dev/termination-log
      #   terminationMessagePolicy: File
      #   command: [ "sh", "-c", "echo never > /sys/kernel/mm/transparent_hugepage/enabled" ]
      #   securityContext:
      #     privileged: true
      containers:
      - name: redis
        image: redis:6.0.8
        imagePullPolicy: IfNotPresent
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - name: redis-persistent-storage
          # mounted at /data so it matches "dir /data" in redis.conf
          mountPath: /data
        - name: redis-config
          mountPath: /usr/local/etc/redis/redis.conf
          subPath: redis.conf
        command: [ "redis-server", "/usr/local/etc/redis/redis.conf" ]
        env:
        - name: TZ
          value: "Asia/Shanghai"
      volumes:
      - name: timezone
        hostPath:
          path: /usr/share/zoneinfo/Asia/Shanghai
      - name: redis-persistent-storage
        persistentVolumeClaim:
          claimName: redis-pvc
      - name: redis-config
        configMap:
          name: redis-config

Apply the manifest to start the Redis StatefulSet:
kubectl apply -f redis-deployment.yaml

4. Service for in-cluster access

Create redis-service.yaml, defining a ClusterIP Service that exposes Redis port 6379 to other applications inside the K3s cluster:

apiVersion: v1
kind: Service
metadata:
  name: redis
  namespace: redis
  labels:
    app: redis
spec:
  ports:
  - name: redis
    port: 6379
    targetPort: 6379
    protocol: TCP
  selector:
    app: redis
  type: ClusterIP

Deploy it:
kubectl apply -f redis-service.yaml

5. IngressRouteTCP for external access

Using the Traefik ingress controller bundled with K3s, create redis-ingress.yaml with an IngressRouteTCP that exposes Redis port 6379 outside the cluster:

apiVersion: traefik.io/v1alpha1
kind: IngressRouteTCP
metadata:
  name: redis-tcp
  namespace: redis
  labels:
    app: redis
spec:
  entryPoints:
    - redis
  routes:
    - match: HostSNI(`*`)
      services:
        - name: redis
          port: 6379

Deploy it:
kubectl apply -f redis-ingress.yaml

Note: an entryPoint named redis (listening on port 6379) must be configured in Traefik beforehand, otherwise the IngressRouteTCP will not take effect.
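With the Traefik bundled in K3s, the extra entryPoint can be declared through a HelmChartConfig; a hedged sketch (the values keys follow the Traefik Helm chart and may differ between chart versions):

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    ports:
      redis:
        port: 6379        # port Traefik listens on
        expose: true      # newer chart versions use expose.default: true
        exposedPort: 6379 # port exposed on the service
        protocol: TCP
```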

III. Verifying the deployment

After the resources are deployed, check their status with the commands below and confirm the service is reachable.

1. Check resource status

# Check the StatefulSet
kubectl get statefulset -n redis
# Check the Redis Pod
kubectl get pods -n redis
# Check the PVC binding
kubectl get pvc -n redis

If every resource reports Running/Bound, the deployment succeeded.

2. Access the Redis service

  • In-cluster access: use the internal service address redis.redis.svc.cluster.local:6379 with the password passwd set in the configuration file

  • External access: use a Traefik node IP with port 6379, with the same password passwd

IV. Key configuration notes

  • Kernel parameter tuning: the init container runs sysctl -w net.core.somaxconn=511 to raise the pending-connection limit and avoid failures when many clients connect.

  • Data persistence: the PVC backs Redis's data directory (redis.conf sets dir /data; the container's mountPath must match it). Together with the RDB and AOF persistence settings this protects against data loss. In hostPath mode the data lives under /mnt/redis/data on the host; in local-path mode it lives under the cluster's default storage path.

  • Configuration management: redis.conf is mounted from a ConfigMap, so changing runtime parameters only requires updating the ConfigMap and restarting the Pod, without rebuilding any image.

  • Network security: the ClusterIP Service limits access to the cluster, the IngressRouteTCP provides controlled external access, and Redis itself enforces password authentication.

V. Summary

This article walked through a standardized Redis deployment on K3s with reusable YAML manifests, covering configuration management, data persistence, network access, and verification. The manifests can be reused in a K3s environment with little or no modification and are sufficient for small and medium workloads, providing a practical reference for running Redis on K3s.
