# Supported Operating Systems
CSGHub-Lite runs on a variety of operating systems and hardware platforms. This document lists the tested and supported environments.
## Operating System Support
| Operating System | Support Status | Architecture | Notes |
|---|---|---|---|
| Ubuntu | ✅ Supported | x86_64, ARM64 | LTS versions recommended (e.g., 20.04, 22.04, 24.04) |
| Debian | ✅ Supported | x86_64, ARM64 | Stable releases recommended |
| CentOS | ✅ Supported | x86_64, ARM64 | CentOS 7/8/9 Stream supported |
| macOS | ✅ Supported | x86_64, Apple Silicon | Supports both Intel Macs and Apple M series chips |
| Windows | ✅ Supported | x86_64 | Windows 10/11 supported, PowerShell recommended |
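Before installing, you can confirm that your OS and CPU architecture match a supported row in the table above. A quick sketch using standard `uname` output (Windows users can check `$env:PROCESSOR_ARCHITECTURE` in PowerShell instead):

```shell
# Print the OS kernel name and CPU architecture to match against the table above
OS="$(uname -s)"    # e.g. Linux or Darwin (macOS)
ARCH="$(uname -m)"  # e.g. x86_64, aarch64, or arm64
echo "OS: ${OS}, Arch: ${ARCH}"
```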
## Hardware Accelerator Support
CSGHub-Lite supports various hardware accelerators to enhance inference performance:
| Hardware | Support Status | Notes |
|---|---|---|
| Mac mini | ✅ Supported | Apple Silicon with native Metal acceleration |
| AMD Instinct MI50 | ✅ Supported | AMD ROCm support |
| AMD Instinct MI200 / MI250 | ✅ Supported | AMD ROCm support |
| NVIDIA GPU Series | ✅ Supported | CUDA support, CUDA-capable GPUs recommended |
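To see which accelerator your machine actually exposes, a minimal detection sketch (assuming Linux with `lspci` available; on macOS, `system_profiler SPDisplaysDataType` lists the GPU instead):

```shell
# List PCI display devices (NVIDIA/AMD GPUs show up here on Linux)
lspci 2>/dev/null | grep -iE 'vga|3d|display' || echo "No PCI display device detected"
```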
## System Requirements

### Minimum Requirements
- CPU: 4 cores or more
- Memory: 8 GB RAM (for running small models like Qwen3-0.6B)
- Storage: 10 GB available disk space
- Network: Stable internet connection (for downloading models)
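The minimum requirements above can be checked with standard tools. A sketch for Linux and macOS (command names differ between the two, as noted in the comments; memory can be checked with `free -h` on Linux or `sysctl -n hw.memsize` on macOS):

```shell
# CPU cores: nproc on Linux, sysctl on macOS
CORES="$(nproc 2>/dev/null || sysctl -n hw.ncpu)"
# Free disk space in the current directory
DISK_FREE="$(df -h . | awk 'NR==2 {print $4}')"
echo "CPU cores: ${CORES}"
echo "Free disk space: ${DISK_FREE}"
[ "${CORES}" -ge 4 ] || echo "Warning: fewer than 4 CPU cores"
```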
### Recommended Requirements
- CPU: 8 cores or more
- Memory: 16 GB RAM or more (for running larger models)
- GPU: NVIDIA GPU (CUDA support) or Apple Silicon Mac
- Storage: 50 GB or more available disk space (for storing multiple models)
## Dependencies

### Required Dependencies

- llama-server: Inference backend, automatically installed by the installation script
  - macOS: `brew install llama.cpp`
  - Linux/Windows: Download from llama.cpp releases

### Optional Dependencies
- Python 3.10+: For SafeTensors model conversion
- PyTorch: Required for SafeTensors conversion
- safetensors: SafeTensors format support
- transformers: Model conversion support
Install the optional dependencies:

```shell
pip3 install torch safetensors gguf transformers
```
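After installing, you can check which of the optional conversion dependencies actually import. A small sketch that reports each module instead of failing on the first missing one:

```shell
# Report install status for each optional Python dependency
for mod in torch safetensors gguf transformers; do
  if python3 -c "import importlib; importlib.import_module('$mod')" 2>/dev/null; then
    echo "$mod: installed"
  else
    echo "$mod: MISSING (pip3 install $mod)"
  fi
done
```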
## Installation Verification

After installation, verify system support with the following commands:

```shell
# Check version information
csghub-lite --version

# Test with a small model
csghub-lite run Qwen/Qwen3-0.6B-GGUF
```
## Troubleshooting

### Windows Installation Fails

Ensure you run the installation script in PowerShell with administrator privileges. If you encounter execution policy restrictions, run:

```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
```
### Missing Dependencies on Linux

If the installation script fails to install llama-server automatically, install it manually:

```shell
# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y llama.cpp

# Or build from source (recent llama.cpp versions build with CMake)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release
```
### Apple Silicon Support Issues on macOS

Ensure Xcode Command Line Tools are installed:

```shell
xcode-select --install
```
### GPU Acceleration Not Working
- NVIDIA GPU: Ensure the latest CUDA drivers and toolkit are installed
- AMD GPU: Ensure ROCm support is installed
- Apple Silicon: Metal acceleration is enabled by default, no additional configuration needed
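A quick way to check which accelerator stack the system can reach. This sketch assumes the vendor tools (`nvidia-smi` for CUDA, `rocminfo` for ROCm) are on the PATH when the corresponding driver is installed:

```shell
if command -v nvidia-smi >/dev/null 2>&1; then
  nvidia-smi                 # NVIDIA: shows driver and CUDA versions
elif command -v rocminfo >/dev/null 2>&1; then
  rocminfo | head -n 20      # AMD: shows ROCm agents
elif [ "$(uname -s)" = "Darwin" ]; then
  # Apple Silicon: Metal is enabled by default; confirm the GPU is listed
  system_profiler SPDisplaysDataType
else
  echo "No GPU tooling detected; inference will run on CPU"
fi
```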
## Deprecated System Versions
The following system versions are no longer officially supported:
| System | Last Supported Version | Notes |
|---|---|---|
| Windows 7/8 | - | Not supported, please upgrade to Windows 10/11 |
| macOS 10.x | - | macOS 11 (Big Sur) or later required |
| Ubuntu 18.04 | - | Please upgrade to an LTS version |
## Feedback and Support

If you encounter issues on a specific system, please get help through the following channels: