| Name | Price | CPU | GPU | Memory | Storage | Display ports | Extension slots | Connectivity | Wireless | Dimensions | Max TFLOPS | Tokens/sec (120B Q4) | Tokens/€ | Tokens/W | Clustering |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Framework Desktop | €3591 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 1TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1, 2× DisplayPort v1.4 (8K@60Hz) | 1× PCIe 4.0 ×4 slot (max 50GbE) | RJ45 5Gbit (Realtek RTL8126), 2× USB4-C, 2× USB-A 3.2 Gen 2 | WiFi 7 (AMD RZ717) | 123.7×123.1×54.6mm | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | Either Ethernet (bad), or adding a 50GbE QSFP28 PCIe adapter + cables |
| Bosgame M5 AI | €2080 (pre-order $1699) | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1, 1× DisplayPort v1.4 (8K@60Hz) | | RJ45 2.5Gbit, 2× USB4-C, 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7, Bluetooth 5.2 (in M.2 2230 key-E PCIe 3.0 slot) | | | | | | |
| GEEKOM A9 Mega AI Mini PC | $3200 / €3500 (kickstarter $1899) | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 2× HDMI v2.1 (8K@60Hz) | | 2× RJ45 2.5Gbit, 2× USB4-C (DisplayPort 2.1 support), 2× USB-C, 3× USB-A 3.2 Gen 2, SD card | WiFi 7, Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 171×171×71mm | | | | | |
| GMKtec EVO-X2 | €3000 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1, 1× DisplayPort v1.4 (8K@60Hz) | | RJ45 2.5Gbit, 2× USB4-C, 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7, Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 193×185.8×77mm | | | | | |
| Beelink GTR9 Pro | €3000 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1 (8K@60Hz) | | 2× RJ45 40Gbit, 3× USB4-C, 2× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7 (MT7925), Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 180×180×90.8mm | | | | | Maybe 2×40Gbit = 10 GB/s |
| FEVM FA-EX9 | €2950 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, NVMe PCIe 4.0 ×4 + 1× M.2 2280, max 8TB each | 1× HDMI v2.1, 1× DisplayPort v1.4 (8K@60Hz) | OCuLink 64Gb/s | RJ45 2.5Gbit, 2× USB4-C, 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7 (MT7925), Bluetooth 5.3 (in M.2 2230 key-E PCIe 3.0 slot) | 192×190×55mm | | | | | OCuLink |
| MINISFORUM MS-S1 MAX | €3120 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, NVMe PCIe 4.0 ×4 + 1× M.2 2280, max 8TB each | 1× HDMI v2.1 (8K@60Hz) | 1× PCIe 4.0 ×4 slot | 2× RJ45 10Gbit (Realtek RTL8127), 2× USB4v2-C (80Gb/s, DisplayPort 2.0, PD out 15W), 2× USB4-C (40Gb/s, DisplayPort 2.0, PD out 15W), 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7 (MT7925), Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 192×190×55mm | | | | | See above, using a MCX416A-CCAT Mellanox ConnectX-4 2× 50Gbit/s QSFP28 ($146) |
| MINISFORUM N5 Max | €? | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | barebone, NVMe PCIe 4.0 ×4 + 3× M.2 2280 ×1, max 8TB each, 5× 3.5″/2.5″ SATA drives | 1× HDMI v2.1 (8K@60Hz) | 1× PCIe 4.0 ×4 slot | 2× RJ45 10Gbit, 2× USB4v2-C (80Gb/s), 1× USB4-C (40Gb/s), 2× USB-A 3.2 Gen 2, 1× USB-A 2.0, SD card | WiFi 7 (MT7925), Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 199×202×252mm | | | | | OCuLink (external PCIe 4.0 ×4) |
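As a sanity check on the clustering-bandwidth figures above (pure unit conversion; it assumes full line rate and ignores protocol overhead):

```bash
# Two aggregated 40 Gbit/s links, converted from bits to bytes (divide by 8):
echo "$(( 2 * 40 / 8 )) GB/s"   # prints: 10 GB/s
```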
NVIDIA chipset:
```
# docker compose exec openclaw-gateway /app/openclaw.mjs models list --provider kilocode
Model                    Input       Ctx   Local  Auth  Tags
kilocode/kilo-auto/free  text        977k  no     yes   default
kilocode/kilo/auto       text+image  977k  no     yes   configured,alias:Kilo Gateway
```
Create the `openclaw` user:

```bash
useradd -m -u 987 -g 1001 -s /usr/sbin/nologin openclaw \
  && mkdir -m 700 /home/openclaw \
  && chown openclaw:openclaw /home/openclaw
```

Make sure `gateway.auth.token` in `/home/openclaw/.openclaw/openclaw.json` matches the one in `docker-compose.yaml` (`openclaw-gateway.environment.OPENCLAW_GATEWAY_TOKEN`). Start and stop the gateway with `docker compose up -d openclaw-gateway` / `docker compose down`.

```
# docker compose exec openclaw-gateway bash
$ /app/openclaw.mjs devices list
Direct scope access failed; using local fallback.
Pending (1)
┌──────────────────────────────────────┬───────────────────────────────────────────────────┬──────────┬────┬──────────┬───────┐
│ Request                              │ Device                                            │ Role     │ IP │ Age      │ Flags │
├──────────────────────────────────────┼───────────────────────────────────────────────────┼──────────┼────┼──────────┼───────┤
│ 4ec7a9d0-f5f2-4316-9c07-f9e4fa96620a │ f74fdd392ee3bca006e344a86554ef76b473d7e2bd8502fde │ operator │    │ just now │       │
└──────────────────────────────────────┴───────────────────────────────────────────────────┴──────────┴────┴──────────┴───────┘
$ /app/openclaw.mjs devices approve 4ec7a9d0-f5f2-4316-9c07-f9e4fa96620a
Direct scope access failed; using local fallback.
Approved f74fdd392ee3bca006e344a86554ef76b473d7e2bd8502fde (4ec7a9d0-f5f2-4316-9c07-f9e4fa96620a)
```
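A quick way to compare the two tokens (the `jq` path follows the `gateway.auth.token` layout mentioned above; adjust if your file differs):

```bash
jq -r '.gateway.auth.token' /home/openclaw/.openclaw/openclaw.json
grep OPENCLAW_GATEWAY_TOKEN docker-compose.yaml
```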
The Control UI requires `gateway.controlUi.allowedOrigins` to be set to explicit origins (alternatively, set `gateway.controlUi.dangerouslyAllowHostHeaderOriginFallback=true` to fall back to the Host-header origin). When running the gateway in local mode behind Apache, add the following to `.openclaw/openclaw.json`:
"gateway": {
"controlUi": {
"allowedOrigins": [
"http://localhost"
]
}
}and force that host in Apache configuration:
RequestHeader set Origin "http://localhost:18789"
ProxyAddHeaders Off
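A rough way to test the forced origin from the gateway host (a sketch; it assumes the gateway answers plain HTTP on 127.0.0.1:18789 as configured below, and the exact status code for a rejected origin may differ):

```bash
# Expect a non-403 response when the Origin header matches the allowed list:
curl -s -o /dev/null -w '%{http_code}\n' \
  -H 'Origin: http://localhost:18789' \
  http://127.0.0.1:18789/
```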
Set the gateway bind address and port (`--bind` and `--port`) via `.openclaw/openclaw.json`:
"gateway": {
"port": 18789,
"mode": "local",
"bind": "loopback"
}
Then check logs:
```
openclaw-gateway-1  | 2026-04-05T17:55:40.225+00:00 [canvas] host mounted at http://127.0.0.1:18789/__openclaw__/canvas/ (root /home/node/.openclaw/canvas)
openclaw-gateway-1  | 2026-04-05T17:55:40.296+00:00 [gateway] listening on ws://127.0.0.1:18789, ws://[::1]:18789 (PID 7)
```
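To follow the gateway output continuously instead of a one-off check:

```bash
docker compose logs -f openclaw-gateway
```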
Create `docker/Dockerfile`:

```dockerfile
FROM alpine/openclaw:2026.4.2
USER root
RUN apt-get update -q \
 && apt-get install -y -q --no-install-recommends \
      python3-bs4 \
      python3-dateutil \
      python3-lxml \
      python3-requests \
      python3-yaml \
 && rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
USER openclaw
```

and build it:

```bash
# docker build -t openclaw-custom:2026.4.2 .
```
Then use `openclaw-custom:2026.4.2` in `docker-compose.yaml`.
`docker-compose.yaml`:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    user: "987:1001"
    network_mode: host
    volumes:
      - /home/openclaw/.ollama:/.ollama
```
```bash
# docker compose up -d ollama
# docker compose exec ollama ollama pull llama3.1:8b-instruct-q4_K_M
# docker compose exec ollama ollama run llama3.1:8b-instruct-q4_K_M
```
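Since the container runs with `network_mode: host`, the Ollama API is reachable on the host's port 11434; a quick check that the model is registered:

```bash
# Lists locally available models as JSON (standard Ollama endpoint):
curl -s http://127.0.0.1:11434/api/tags
```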
.openclaw/openclaw.json looks like this:
"models": {
"providers": {
"vllm": {
"baseUrl": "http://127.0.0.1:11434/v1",
"apiKey": "dummy-api-key",
"api": "openai-completions",
"models": [
{
"id": "ollama/llama3.1:8b-instruct-q4_K_M",
"name": "ollama/llama3.1:8b-instruct-q4_K_M",
"reasoning": false,
...
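After editing the config, re-running the models listing from earlier should show the new entry with `Local` = `yes`:

```bash
docker compose exec openclaw-gateway /app/openclaw.mjs models list
```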
Install the qmd-external skill:

```bash
docker compose exec openclaw-gateway /app/openclaw.mjs skills install qmd-external
```

and add `npm install -g @tobilu/qmd` to the Dockerfile. `.openclaw/.env`:

```
XDG_CONFIG_HOME=/home/node/.openclaw/.config
XDG_CACHE_HOME=/home/node/.openclaw/.cache
```
```bash
$ set -a; source ~/.openclaw/.env; set +a
$ qmd collection add ~/.openclaw/workspace --name workspace --mask "**/*.md"
$ qmd status
QMD Status
  Index: /home/node/.openclaw/.cache/qmd/index.sqlite
  Size: 904.0 KB
Documents
  Total: 77 files indexed
  Vectors: 0 embedded
  Pending: 77 need embedding (run 'qmd embed')
  Updated: 2h ago
AST Chunking
  Status: active
  Languages: typescript, tsx, javascript, python, go, rust
Collections
  workspace (qmd://workspace/)
    Pattern: **/*.md
    Files: 77 (updated 2h ago)
Device
  GPU: none (running on CPU — models will be slow)
  Tip: Install CUDA, Vulkan, or Metal support for GPU acceleration.
  CPU: 4 math cores
```
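The status above reports 77 documents pending embedding; run the command it suggests (same environment as before):

```bash
$ set -a; source ~/.openclaw/.env; set +a
$ qmd embed
```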
Install the agent-browser skill with `docker compose exec openclaw-gateway /app/openclaw.mjs skills install agent-browser-clawdbot` and add `npm install -g agent-browser` to the Dockerfile (optionally install Linux packages and download Chrome). Install the e-mail skill with `docker compose exec openclaw-gateway /app/openclaw.mjs skills install imap-smtp-email` and configure it in `.openclaw/workspace/skills/imap-smtp-email/.env`.

Add the volumes below to `docker-compose.yaml` if agent-browser is not installed system-wide:
```yaml
volumes:
  - /home/openclaw/.openclaw:/home/node/.openclaw
  - /home/openclaw/.openclaw/.agent-browser:/home/node/.agent-browser
  - /home/openclaw/.openclaw/.cache:/home/node/.cache
```

```bash
~/.openclaw$ ./bin/agent-browser install
~/.openclaw$ ./bin/agent-browser open google.com
```
`.openclaw/openclaw.json`:

```json
"browser": {
  "cdpUrl": "http://127.0.0.1:3003"
}
```
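To verify the endpoint is up, query the standard Chrome DevTools Protocol version endpoint (this assumes agent-browser exposes the usual CDP HTTP interface on the port configured above):

```bash
curl -s http://127.0.0.1:3003/json/version
```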
`.openclaw/openclaw.json` (check also the TTS documentation and audio formats):

```
"messages": {
  "tts": {
    "auto": "inbound",
    "provider": "microsoft",
    "providers": {
      "microsoft": {
        "enabled": true,
        "type": "microsoft",
        "voice": "en-GB-ChristopherMultilingualNeural", // alternatively use "en-GB-AndrewMultilingualNeural"
        "rate": "+10%",
        "outputFormat": "audio-24khz-48kbitrate-mono-mp3"
      },
      "openai": {
        "enabled": true,
        "apiKey": "${OPENAI_API_KEY}",
        "model": "gpt-4o-mini-tts",
        "voice": "onyx", // change to "alloy" for a female voice
        "rate": "1.1"
      }
    }
  }
}
```
The format `ogg-24khz-16bit-mono-opus` should be supported by Microsoft, but in practice only `audio-24khz-48kbitrate-mono-mp3` worked.
Dockerfile:

```dockerfile
USER root
RUN pip install piper-tts --break-system-packages
USER node
RUN mkdir /home/node/.piper
# Check more voices here: https://huggingface.co/rhasspy/piper-voices/tree/main/en/en_GB
RUN python3 -m piper.download_voices --data-dir /home/node/.piper en_GB-northern_english_male-medium
```
workspace/skills/piper-tts/SKILL.md
# Piper TTS Skill
Local offline text-to-speech using Piper neural TTS. No API keys, no network calls — fully on-device synthesis.
## Quick Start
```bash
# Basic invocation — note the model must be a full path to an .onnx file
piper \
-m /home/node/.piper/en_GB-northern_english_male-medium.onnx \
-f /tmp/output.wav \
-- "Your text here"
```
**Example to console (raw PCM):**
```bash
echo "Hello" | piper -m model.onnx --output-raw > audio.raw
```
## CLI Parameters
All long flags accept both hyphenated (`--flag-name`) and underscore (`--flag_name`) forms.
| Short | Long | Type | Default | Description |
|-------|----------------------|--------------------|-------------|--------------------------------------------------|
| `-h` | `--help` | — | — | Show help and exit |
| `-m` | `--model` | path | required | Path to the `.onnx` model file |
| `-c` | `--config` | path | — | Path to model config JSON (optional) |
| `-i` | `--input-file` | path(s) | — | Read text from file(s) instead of stdin/arg |
| `-f` | `--output-file` | path | stdout | Write WAV to file (default: raw to stdout) |
| `-d` | `--output-dir` | path | cwd | Write WAV file(s) to directory |
| — | `--output-dir-naming`| `{timestamp,text}` | `timestamp` | Filename scheme for output directory |
| — | `--output-raw` | — | false | Stream raw PCM to stdout instead of WAV header |
| `-s` | `--speaker` | int | 0 | Speaker ID (multi-speaker models only) |
| — | `--length-scale` | float | 1.0 | Phoneme duration multiplier (higher = slower) |
| — | `--noise-scale` | float | 0.667 | Generator noise |
| — | `--noise-w-scale` | float | 0.8 | Phoneme width noise scale |
| — | `--cuda` | — | false | Use GPU acceleration (requires onnxruntime-gpu) |
| — | `--sentence-silence` | float | 0.0 | Seconds of silence after each sentence |
| — | `--volume` | float | 1.0 | Volume multiplier (1.0 = normal) |
| — | `--no-normalize` | — | false | Skip automatic volume normalisation |
| — | `--data-dir` | path | `.` | Directory that contains voice models |
| — | `--debug` | — | false | Print debug information to stderr |
Text passed directly on the command line must be placed after `--` so it's not parsed as a flag.
## Installing Piper
```bash
pip install piper-tts
```
The `piper` binary is installed to `/usr/local/bin/piper` by default. Verify:
```bash
which piper
```
Voices are `.onnx` files downloaded separately. The Quick Start example uses the model already present at `/home/node/.piper/en_GB-northern_english_male-medium.onnx`.
## Post-Processing: WAV → OGG
Telegram voice messages and many chat apps prefer OGG/Opus format for smaller size and better quality. This is strongly recommended — WAV works fine, but OGG reduces file size 4–5× with no audible quality loss for speech.
Convert a Piper-generated WAV file to OGG/Opus:
```bash
ffmpeg -i /tmp/input.wav -c:a libopus -b:a 32k -vbr on /tmp/output.ogg
```
**Common presets:**
- `-b:a 32k` — good quality voice (recommended for speech)
- `-b:a 64k` — high quality music
- `-vbr on` — variable bitrate (smaller files)
## One-Pass WAV → OGG (optimized)
Instead of writing an intermediate WAV file, pipe Piper's output directly to ffmpeg for on-the-fly conversion:
```bash
echo "Your text here" | \
piper \
-m /home/node/.piper/en_GB-northern_english_male-medium.onnx \
--output-raw | \
ffmpeg \
-f s16le -ar 22050 -ac 1 \
-i pipe:0 \
-c:a libopus -b:a 32k -vbr on \
-f ogg /tmp/output.ogg
```
This avoids writing a temporary WAV file to disk and reduces I/O overhead.
**Explanation:**
- `--output-raw` — Piper emits raw PCM (no WAV header) to stdout
- `-f s16le` — 16-bit signed little-endian PCM format
- `-ar 22050` — sample rate matches Piper's output
- `-ac 1` — mono channel
- `-i pipe:0` — read input from stdin
- `-f ogg` — force OGG container output
## Notes
To speed up processing, consider running Piper as a [Web server](https://github.com/OHF-Voice/piper1-gpl/blob/main/docs/API_HTTP.md) so the model stays loaded between requests.
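A rough sketch of that setup (the module name, flags, and port here are assumptions; check API_HTTP.md for the actual interface):

```bash
# Start a long-lived Piper server once so the model stays loaded (hypothetical invocation):
python3 -m piper.http_server \
  -m /home/node/.piper/en_GB-northern_english_male-medium.onnx &

# Each request then skips model-load time (endpoint and port assumed):
curl -s -X POST -H 'Content-Type: text/plain' \
  --data 'Your text here' http://127.0.0.1:5000 > /tmp/output.wav
```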
The openai-whisper skill is already included in the vanilla OpenClaw image. Dockerfile:

```dockerfile
USER root
RUN pip install openai-whisper --break-system-packages
USER node
RUN whisper --model tiny dummy.mp3
RUN whisper --model medium dummy.mp3
USER root
# Replace the script with a wrapper:
RUN mv /usr/local/bin/whisper /usr/local/bin/whisper.py
COPY --chown=root:root whisper /usr/local/bin/whisper
# Forbid downloading new models (stick to "medium"):
RUN chmod a-w -R /home/node/.cache/whisper
```
which will pre-download the model and replace the script with a wrapper.
The wrapper script:

```bash
#!/bin/bash
#
# Wrapper script that overrides model and language (for faster processing on CPU).
# Note that this wrapper does not actually execute /usr/local/bin/whisper.py;
# it runs whisper/transcribe.py as __main__.
#
exec python3 -m whisper.transcribe "$@" --model medium --language en --fp16 False
```
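With the wrapper in place, a call like the following works unchanged (the file name is illustrative; `--output_dir` and `--output_format` are standard openai-whisper flags, and the wrapper appends `--model medium --language en --fp16 False`):

```bash
whisper /tmp/voice-message.ogg --output_dir /tmp --output_format txt
```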
Run `docker compose run --rm openclaw-cli onboard`, or configure manually as below and inspect the result with `docker compose exec openclaw-gateway /app/openclaw.mjs config` 1).

```
# docker compose exec openclaw-gateway /app/openclaw.mjs models list
Model                                       Input       Ctx    Local  Auth  Tags
openrouter/stepfun/step-3.5-flash:free      text        250k   no     yes   default,configured,alias:OpenRouter
openrouter/minimax/minimax-m2.5:free        text        192k   no     yes   fallback#1,configured,alias:text-reasoning-minimax
ollama/qwen2.5:7b                           text        32k    yes    yes   configured,alias:text-reasoning-qwen
openrouter/anthropic/claude-sonnet-4.6      text+image  977k   no     yes   configured,alias:text-reasoning-sonnet
openrouter/google/gemini-3-flash-preview    text+image  1024k  no     yes   configured,alias:text-reasoning-gemini
openrouter/nvidia/nemotron-3-nano-30b-a...  text        250k   no     yes   configured,alias:text-reasoning-nemotron
openrouter/openai/gpt-oss-120b:free         text        128k   no     yes   configured,alias:text-reasoning-gpt
openrouter/qwen/qwen3-next-80b-a3b-inst...  text        256k   no     yes   configured,alias:text-reasoning-instruct
openrouter/qwen/qwen3-coder:free            text        256k   no     yes   configured,alias:coder
openai/gpt-5.1-codex                        text+image  391k   no     yes   configured,alias:GPT
openrouter/nvidia/nemotron-nano-12b-v2-...  text+image  125k   no     yes   image,configured,alias:image-reasoning
```
When access is not yet configured, Telegram messages are answered with:

```
OpenClaw: access not configured.
Your Telegram user id: 20538162
Pairing code: Z4RQ92MX
```

so you need to run:

```
# docker compose exec openclaw-gateway /app/openclaw.mjs pairing approve telegram Z4RQ92MX
Approved telegram sender 20538162.
```
| Task / LLM name | openrouter/stepfun/step-3.5-flash:free | openrouter/openai/gpt-oss-120b:free | ollama/qwen2.5:7b | openrouter/google/gemini-3-flash-preview | nvidia/nemotron-3-nano-30b-a3b:free | qwen/qwen3-next-80b-a3b-instruct:free | ollama/minimax-m2.7:cloud |
|---|---|---|---|---|---|---|---|
| Free / price | | | | $0.50/M input tokens + $3/M output tokens | | | |
| Remembers to use “agent-browser” skill as instructed in TOOLS.md | | | | | | | |
| Tries to complete the task (does not stop in the middle) | | | | | | | |
| “Send to Telegram the summary of OS, hardware of the system you're running on plus current model detailed configuration and usage.” | | | | | | has even added emoji to message 💪 | |
| Is able to complete the task about mini PC news tracking 2) | | lost in iterations | but executed only one search query instead of five | stopped after 1st iteration and could not resume | | | |
| Is able to complete the task about ParagraphAlignment start 3) | | | | | | | |
2) The news-tracking task stores results in `workspace/ryzen-ai/news.db` using the URL as a key, and mails them to openclaw@localhost with the subject “New Ryzen AI mini PCs” using the SMTP server on port 25.
Find the fuel station with the cheapest Euro 95 fuel in Delft and within 15 km around it. Generate the route from Hanos (Delft) to that station as an image. Send the route image, plus the address of the station and the price as an audio message, to Telegram.