# Prerequisites
Please make sure to read this section carefully to avoid deployment issues.
## Version Requirements
- Kubernetes version 1.20+
- Helm version 3.8.0+
- Dynamic PV (Persistent Volume) storage provisioning
## Kubernetes Cluster
Note: The following cluster deployment methods are not recommended for production environments. For production, use managed Kubernetes clusters from cloud providers like ACK, or deploy a standard Kubernetes cluster.
- Recommended server configuration: 1 node with 8 CPU cores and 16 GB RAM (8c16g)
- Server architecture: x86_64 preferred
This guide does not provide detailed steps for installing a cluster but offers general instructions. You can use the following options to set up the basic environment:
### Docker Desktop
If you already have Docker Desktop installed, you can quickly set up a Kubernetes test environment as follows:

1. Open the Docker Desktop dashboard
2. Click **Settings**
3. Select **Kubernetes** from the left sidebar
4. Check **Enable Kubernetes**

Note: Once enabled, the message "Kubernetes running" will appear at the bottom left of the dashboard.
### K3s

```bash
# Install the cluster
curl -sfL https://rancher-mirror.rancher.cn/k3s/k3s-install.sh | INSTALL_K3S_MIRROR=cn INSTALL_K3S_VERSION=v1.30.4+k3s1 sh -

# Copy the kubeconfig (k3s writes it root-readable, so sudo may be needed)
mkdir -p ~/.kube
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
chmod 0400 ~/.kube/config
```
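The `chmod 0400` above matters because the kubeconfig contains the cluster's admin credentials; restricting it to owner-read-only keeps other local users from reading it. A quick local sketch of the resulting permission bits, using a throwaway file (no cluster required):

```bash
# Create a scratch file and restrict it the same way as the kubeconfig.
f=$(mktemp)
chmod 0400 "$f"
stat -c '%a' "$f"   # → 400: read-only for the owner, no access for anyone else
rm -f "$f"
```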
## Helm
There are two options for installing Helm:
### Script

```bash
# Download the installation script
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3

# Run the installation
chmod 700 get_helm.sh && ./get_helm.sh

# Verify the installation
helm version
```

### Other
```bash
# Install
snap install helm --classic

# Verify the installation
helm version
```
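The Version Requirements above call for Helm 3.8.0 or newer. A sketch of checking that in shell with `sort -V`; the `current` value here is a placeholder, in practice you would take it from the `helm version` output:

```bash
required="3.8.0"
current="3.14.2"   # placeholder; read the real value from `helm version`

# sort -V orders version strings numerically; if the required version sorts
# first (or equals the current one), the installed Helm is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$current" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "helm $current meets the $required+ requirement"
else
  echo "helm $current is too old" >&2
fi
```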
## Dynamic Storage
If no other dynamic PV solution is available and this is just for local testing, you can use the following method.
### Create a StorageClass
```bash
# Create the namespace
kubectl create ns kube-storage

# Create the StorageClass
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-disks
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
allowVolumeExpansion: true
EOF
```

### Install local-volume-provisioner
```bash
# Add the chart repository
helm repo add sig-storage-local-static-provisioner https://kubernetes-sigs.github.io/sig-storage-local-static-provisioner

# Update the repository
helm repo update

# Generate the resource file
helm template --debug sig-storage-local-static-provisioner/local-static-provisioner --namespace kube-storage | sed 's/registry.k8s.io/opencsg-registry.cn-beijing.cr.aliyuncs.com\/opencsg_public/g' > local-volume-provisioner.generated.yaml
```
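The `sed` in the step above rewrites every image reference from `registry.k8s.io` to the Aliyun mirror, so the images can be pulled without access to the upstream registry. Shown on a single sample manifest line (the image name and tag here are illustrative):

```bash
echo 'image: registry.k8s.io/sig-storage/local-volume-provisioner:canary' \
  | sed 's/registry.k8s.io/opencsg-registry.cn-beijing.cr.aliyuncs.com\/opencsg_public/g'
# → image: opencsg-registry.cn-beijing.cr.aliyuncs.com/opencsg_public/sig-storage/local-volume-provisioner:canary
```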
```bash
# Apply the resource file
kubectl apply -f local-volume-provisioner.generated.yaml
```

### Create virtual disk mount points
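The loop below bind-mounts 26 "fake disks" under `/mnt/fast-disks`, one per letter, so that the provisioner can discover each mount point as a separate PV. To preview the `/etc/fstab` entries it will append, without needing root, you can run just the `echo` part:

```bash
# Print the 26 bind-mount entries the loop generates (nothing is mounted).
for flag in {a..z}; do
  echo "/mnt/fake-disks/sd${flag} /mnt/fast-disks/sd${flag} none bind 0 0"
done
# first line: /mnt/fake-disks/sda /mnt/fast-disks/sda none bind 0 0
```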
```bash
for flag in {a..z}; do
  mkdir -p /mnt/fake-disks/sd${flag} /mnt/fast-disks/sd${flag} 2>/dev/null
  mount --bind /mnt/fake-disks/sd${flag} /mnt/fast-disks/sd${flag}
  echo "/mnt/fake-disks/sd${flag} /mnt/fast-disks/sd${flag} none bind 0 0" >> /etc/fstab
done
```

### Create a test Pod
```bash
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: local-volume-example
  namespace: default
spec:
  serviceName: "local-volume-example-service"
  replicas: 1 # Number of instances
  selector:
    matchLabels:
      app: local-volume-example
  template:
    metadata:
      labels:
        app: local-volume-example
    spec:
      containers:
        - name: local-volume-example
          image: busybox:latest
          # Keep the container alive; busybox exits immediately without a command
          command: ["sleep", "3600"]
          ports:
            - containerPort: 80
          volumeMounts:
            - name: example-storage
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: example-storage
      spec:
        accessModes: [ "ReadWriteOnce" ]
        resources:
          requests:
            storage: 1Gi
EOF
```
At this point, when the StatefulSet requests a PVC, a PV is automatically created and bound to it. However, unlike cloud storage, this method does not strictly enforce the requested PV size; it is simply a convenient way to avoid creating PVs manually.