Supported Operating Systems

CSGHub-Lite runs on a variety of operating systems and hardware platforms. This document lists the tested and supported environments.

Operating System Support

| Operating System | Support Status | Architecture | Notes |
|---|---|---|---|
| Ubuntu | ✅ Supported | x86_64, ARM64 | LTS versions recommended (e.g., 20.04, 22.04, 24.04) |
| Debian | ✅ Supported | x86_64, ARM64 | Stable releases recommended |
| CentOS | ✅ Supported | x86_64, ARM64 | CentOS 7/8/9 Stream supported |
| macOS | ✅ Supported | x86_64, Apple Silicon | Supports both Intel Macs and Apple M-series chips |
| Windows | ✅ Supported | x86_64 | Windows 10/11 supported; PowerShell recommended |

Hardware Accelerator Support

CSGHub-Lite supports various hardware accelerators to enhance inference performance:

| Hardware | Support Status | Notes |
|---|---|---|
| Mac mini | ✅ Supported | Apple Silicon with native Metal acceleration |
| AMD Instinct MI50 | ✅ Supported | AMD ROCm support |
| AMD Instinct MI200 / MI250 | ✅ Supported | AMD ROCm support |
| NVIDIA GPU series | ✅ Supported | CUDA-capable GPUs recommended |

System Requirements

Minimum Requirements

  • CPU: 4 cores or more
  • Memory: 8 GB RAM (for running small models like Qwen3-0.6B)
  • Storage: 10 GB of available disk space
  • Network: Stable internet connection (for downloading models)

Recommended Requirements

  • CPU: 8 cores or more
  • Memory: 16 GB RAM or more (for running larger models)
  • GPU: NVIDIA GPU (CUDA support) or Apple Silicon Mac
  • Storage: 50 GB or more of available disk space (for storing multiple models)
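As a quick sanity check on Linux, the commands below print the values to compare against the requirements above (a minimal sketch using standard utilities; on macOS, `sysctl` and `df` would be used instead of `nproc` and `free`):

```shell
#!/bin/sh
# Print hardware figures to compare against the minimum requirements (Linux).

echo "CPU cores: $(nproc)"                                # minimum: 4
echo "Memory:    $(free -h | awk '/^Mem:/{print $2}')"    # minimum: 8 GB
echo "Free disk: $(df -h / | awk 'NR==2{print $4}')"      # minimum: 10 GB
```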

Dependencies

Required Dependencies

  • llama-server: Inference backend, automatically installed by the installation script

Optional Dependencies

  • Python 3.10+: For SafeTensors model conversion
  • PyTorch: Required for SafeTensors conversion
  • safetensors: SafeTensors format support
  • gguf: GGUF format conversion support
  • transformers: Model conversion support

Install optional dependencies:

pip3 install torch safetensors gguf transformers
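To confirm the packages installed correctly, a quick check like the following can be run (illustrative; note that the PyTorch package imports as `torch`):

```shell
# Report which optional conversion dependencies are importable.
python3 - <<'EOF'
import importlib.util

for module in ("torch", "safetensors", "gguf", "transformers"):
    status = "installed" if importlib.util.find_spec(module) else "missing"
    print(f"{module}: {status}")
EOF
```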

Installation Verification

After installation, verify system support with the following commands:

# Check version information
csghub-lite --version

# Test with a small model
csghub-lite run Qwen/Qwen3-0.6B-GGUF

Troubleshooting

Windows Installation Fails

Ensure you run the installation script in PowerShell with administrator privileges. If you encounter execution policy restrictions, run:

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser

Missing Dependencies on Linux

If the installation script fails to install llama-server automatically, install it manually:

# Ubuntu/Debian
sudo apt-get update
sudo apt-get install -y llama.cpp

# Or build from source
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp && make
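Whichever route you take, you can confirm that `llama-server` ended up on your PATH before retrying (a minimal check; if you built from source, you may first need to copy or symlink the binary into a directory on your PATH):

```shell
# Check whether llama-server is visible on PATH.
if command -v llama-server >/dev/null 2>&1; then
    echo "llama-server found at: $(command -v llama-server)"
else
    echo "llama-server not found on PATH"
fi
```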

Apple Silicon Support Issues on macOS

Ensure Xcode Command Line Tools are installed:

xcode-select --install

GPU Acceleration Not Working

  • NVIDIA GPU: Ensure the latest CUDA drivers and toolkit are installed
  • AMD GPU: Ensure ROCm support is installed
  • Apple Silicon: Metal acceleration is enabled by default, no additional configuration needed
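The checks above can be scripted; this sketch probes each vendor's standard tooling and reports the first match (`nvidia-smi` ships with the NVIDIA driver and `rocminfo` with ROCm; neither is installed by CSGHub-Lite itself):

```shell
#!/bin/sh
# Probe for GPU acceleration support, one vendor at a time.
if command -v nvidia-smi >/dev/null 2>&1; then
    echo "NVIDIA GPU detected:"
    nvidia-smi --query-gpu=name,driver_version --format=csv,noheader
elif command -v rocminfo >/dev/null 2>&1; then
    echo "AMD ROCm detected"
elif [ "$(uname -sm)" = "Darwin arm64" ]; then
    echo "Apple Silicon detected (Metal is enabled by default)"
else
    echo "No supported GPU tooling found; inference will run on CPU"
fi
```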

Deprecated System Versions

The following system versions are no longer officially supported:

| System | Last Supported Version | Notes |
|---|---|---|
| Windows 7/8 | - | Not supported; please upgrade to Windows 10/11 |
| macOS 10.x | - | macOS 11 (Big Sur) or later is required |
| Ubuntu 18.04 | - | Please upgrade to a supported LTS version |

Feedback and Support

If you encounter issues on a specific system, please reach out through the following channels: