Installation and Deployment
Quick Deployment
You can use quick deployment to spin up a usable CSGHub environment in minutes.
# <domain>: your domain name, e.g. example.com
curl -sfL https://raw.githubusercontent.com/OpenCSGs/csghub-installer/refs/heads/main/helm-chart/install.sh | bash -s -- <domain>
# To enable NVIDIA GPU support
curl -sfL https://raw.githubusercontent.com/OpenCSGs/csghub-installer/refs/heads/main/helm-chart/install.sh | ENABLE_NVIDIA_GPU=true bash -s -- <domain>
Manual Deployment
### Installing Knative Serving
- Reference: Install Knative Serving using YAML files
Note: If the target cluster for the final deployment is not this Kubernetes cluster, install Knative Serving on the target cluster.
Knative Serving is a required component for CSGHub to create applications such as Spaces. If you are deploying in a cloud environment, consider using the equivalent component provided by your cloud provider.
Install Core Components
Install Custom Resources
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.15.2/serving-crds.yaml
# If pulling from gcr.io fails, use the following command
kubectl apply -f https://raw.githubusercontent.com/OpenCSGs/CSGHub-helm/main/knative/serving-crds.yaml
Install Core Components
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.15.2/serving-core.yaml
# If pulling from gcr.io fails, use the following command
kubectl apply -f https://raw.githubusercontent.com/OpenCSGs/CSGHub-helm/main/knative/serving-core.yaml
Install Networking Components
We will use Kourier as the default networking component.
Install the Kourier controller
kubectl apply -f https://github.com/knative/net-kourier/releases/download/knative-v1.15.1/kourier.yaml
# If pulling from gcr.io fails, use the following command
kubectl apply -f https://raw.githubusercontent.com/OpenCSGs/CSGHub-helm/main/knative/kourier.yaml
Configure Knative Serving to use Kourier as the default networking component
kubectl patch configmap/config-network \
--namespace knative-serving \
--type merge \
--patch '{"data":{"ingress-class":"kourier.ingress.networking.knative.dev"}}'
Retrieve external access address
# Use NodePort (if needed)
kubectl patch service kourier -n kourier-system -p '{"spec": {"type": "NodePort"}}'
# Check the resource status
kubectl --namespace kourier-system get service kourier
Note: If you are using a local Kubernetes cluster, change the service type to NodePort.
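For the helm installation later you will need a node address that is reachable from CSGHub and the NodePort exposed by Kourier. A minimal lookup sketch, assuming the default Kourier service with its HTTP port named http2:
# List nodes with their addresses; pick an InternalIP (or ExternalIP) reachable by CSGHub
kubectl get nodes -o wide
# Print the NodePort behind Kourier's HTTP port (named "http2" in the default manifest)
kubectl -n kourier-system get service kourier -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'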
Verify Installation
kubectl get pods -n knative-serving
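If you want to block until the control plane is fully up, a simple readiness check (the 300s timeout is just an example):
kubectl wait pods --all --for=condition=Ready -n knative-serving --timeout=300s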
Configure DNS
Typically, Knative Serving can resolve internal addresses using Magic DNS or Real DNS. However, due to multi-cluster management, only Real DNS configuration is supported here.
# Replace app.internal with your own internal domain suffix; the same value is used later in the helm install
kubectl patch configmap/config-domain \
--namespace knative-serving \
--type merge \
--patch '{"data":{"app.internal":""}}'
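To confirm the domain was applied, read the ConfigMap back; the key you set (app.internal in this example) should appear in its data:
kubectl -n knative-serving get configmap config-domain -o jsonpath='{.data}'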
Autoscaling
kubectl apply -f https://github.com/knative/serving/releases/download/knative-v1.15.2/serving-hpa.yaml
# If pulling from gcr.io fails, use the following command
kubectl apply -f https://raw.githubusercontent.com/OpenCSGs/CSGHub-helm/main/knative/serving-hpa.yaml
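To verify the HPA extension, you can check that its deployment is available; the default manifest names it autoscaler-hpa:
kubectl -n knative-serving get deploy autoscaler-hpa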
Create KubeConfig Secret
Since CSGHub needs to connect to multiple Kubernetes clusters, connections can only be made through kubeconfig files (.kube/config) rather than through a ServiceAccount. To keep the kubeconfig files secure, you need to create the Secret manually and provide it to helm.
Before creating the Secret, place the config files of all the Kubernetes clusters you want to connect to in a single directory, such as .kube. You can give the config files different names, for example by numbering them.
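A possible layout sketch (the cluster and file names below are placeholders):
# Collect each cluster's kubeconfig under a distinct name in one directory
mkdir -p /root/.kube
cp cluster1-kubeconfig.yaml /root/.kube/config-cluster1
cp cluster2-kubeconfig.yaml /root/.kube/config-cluster2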
kubectl create ns csghub
kubectl create secret generic kube-configs --from-file=/root/.kube/ --namespace=csghub
The above command will create a kube-configs Secret containing all the config files in the .kube directory.
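You can confirm which config files were picked up by listing the Secret's keys:
kubectl -n csghub describe secret kube-configs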
Install CSGHub Helm Chart
Ensure the previous steps are complete before proceeding.
Add helm repository
helm repo add csghub https://opencsgs.github.io/CSGHub-helm
helm repo update
Install Chart
By default, services are exposed using NodePort as most local testing environments do not support LoadBalancer.
# global.ingress.hosts: Replace with your own second-level domain name
# global.builder.internal[0].domain: The internal domain name configured above
# global.builder.internal[0].service.host: The external address of the kourier service
# global.builder.internal[0].service.port: Kourier service external port
helm install csghub csghub/csghub \
--namespace csghub \
--create-namespace \
--set global.ingress.hosts=example.com \
--set global.builder.internal[0].domain=app.internal \
--set global.builder.internal[0].service.host=192.168.18.18 \
--set global.builder.internal[0].service.port=30463
Note:
Once the resources are ready, you can log in to CSGHub following the instructions output by helm. Note that some features, such as model inference and model fine-tuning, are not fully included in the current helm chart due to their complexity. These features are enabled but may require additional configuration before the instances work properly. For more details, please contact our engineers.
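To check whether the resources are ready and to re-display the instructions printed by helm, you can query the release and its pods (release name and namespace as used above):
helm status csghub --namespace csghub
kubectl -n csghub get pods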
If you are using an external Container Registry with a self-signed certificate (unencrypted registries are not supported), follow the steps in Tag Resolution to further configure Knative Serving. This step will be encapsulated in the helm chart in the future.
The detailed steps are as follows:
Create a TLS secret that contains the self-signed CA certificate
kubectl -n knative-serving create secret generic customca --from-file=ca.crt=/root/ca.crt
# If using an internal registry, this step also needs to be performed manually.
kubectl -n csghub get secret csghub-registry-tls-secret -ojsonpath='{.data.ca\.crt}' | base64 -d > ca.crt
kubectl -n knative-serving create secret generic customca --from-file=ca.crt=/root/ca.crt
Patch Knative Serving Controller Deployment
kubectl -n knative-serving patch deploy controller -p '[
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/env/-",
    "value": {
      "name": "SSL_CERT_DIR",
      "value": "/opt/certs/x509"
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/containers/0/volumeMounts/-",
    "value": {
      "name": "custom-certs",
      "mountPath": "/opt/certs/x509"
    }
  },
  {
    "op": "add",
    "path": "/spec/template/spec/volumes/-",
    "value": {
      "name": "custom-certs",
      "secret": {
        "secretName": "customca"
      }
    }
  }
]' --type=json
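To confirm the patch was applied, inspect the controller deployment for the SSL_CERT_DIR variable and the custom-certs volume:
kubectl -n knative-serving get deploy controller -o jsonpath='{.spec.template.spec.containers[0].env}'
kubectl -n knative-serving get deploy controller -o jsonpath='{.spec.template.spec.volumes}'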
Post-install Configuration
The helm chart includes a simple Container Registry for testing purposes, but it does not provide secure encrypted access, so additional configuration is required before images can be pulled from it. For production environments, you should provide your own Registry.
Configure containerd to allow access to the insecure Registry.
Before configuring, ensure that the file /etc/containerd/config.toml exists. If it doesn't, you can create it using the following command:
mkdir -p /etc/containerd/ && containerd config default >/etc/containerd/config.toml
Configure config_path
- containerd 2.x
version = 3
[plugins."io.containerd.cri.v1.images".registry]
config_path = "/etc/containerd/certs.d"
- containerd 1.x
version = 2
[plugins."io.containerd.grpc.v1.cri".registry]
config_path = "/etc/containerd/certs.d"
After this configuration, restart the containerd service.
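A restart sketch, assuming containerd is managed by systemd:
systemctl restart containerd
systemctl status containerd --no-pager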
Config hosts.toml
mkdir -p /etc/containerd/certs.d/registry.example.com:32500 # This port is the NodePort set in the helm chart and can be modified via --set global.registry.service.nodePort=32500
cat <<EOF > /etc/containerd/certs.d/registry.example.com:32500/hosts.toml
server = "https://registry.example.com:5000"
[host."http://192.168.170.22:5000"]
capabilities = ["pull", "resolve", "push"]
skip_verify = true
EOF
Note: This configuration takes effect immediately, without a reboot.
Test configuration
ctr images pull --hosts-dir "/etc/containerd/certs.d" registry.example.com:5000/image_name:tag