| Name | Price | CPU | GPU | Memory | Storage | Display ports | Extension slots | Connectivity | Wireless | Dimensions | Max TFLOPS | Tokens/sec (120B Q4) | Tokens / € | Tokens / W | Clustering |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Framework Desktop | €3591 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 1TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1, 2× DisplayPort v1.4 (8K@60Hz) | 1× PCIe 4.0 ×4 slot (max 50GbE) | RJ45 5Gbit (Realtek RTL8126), 2× USB4-C, 2× USB-A 3.2 Gen 2 | WiFi 7 (AMD RZ717) | 123.7×123.1×54.6mm | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | Either Ethernet (bad), or adding 50GbE QSFP28 PCIe adapter + cables |
| Bosgame M5 AI | €2080 (pre-order $1699) | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1, 1× DisplayPort v1.4 (8K@60Hz) | | RJ45 2.5Gbit, 2× USB4-C, 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7, Bluetooth 5.2 (in M.2 2230 key-E PCIe 3.0 slot) | | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | |
| GEEKOM A9 Mega AI Mini PC | $3200 / €3500 (kickstarter $1899) | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 2× HDMI v2.1 (8K@60Hz) | | 2× RJ45 2.5Gbit, 2× USB4-C (support DisplayPort 2.1), 2× USB-C, 3× USB-A 3.2 Gen 2, SD card | WiFi 7, Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 171×171×71mm | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | |
| GMKtec EVO-X2 | €3000 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1, 1× DisplayPort v1.4 (8K@60Hz) | | RJ45 2.5Gbit, 2× USB4-C, 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7, Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 193×185.8×77mm | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | |
| Beelink GTR9 Pro | €3000 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, 2× NVMe PCIe 4.0 ×4 M.2 2280, max 8TB each | 1× HDMI v2.1 (8K@60Hz) | | 2× RJ45 40Gbit, 3× USB4-C, 2× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7 (MT7925), Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 180×180×90.8mm | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | Maybe 2× 40Gbit = 80Gbit/s ≈ 10GB/s |
| FEVM FA-EX9 | €2950 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, NVMe PCIe 4.0 ×4 + x1 M.2 2280, max 8TB each | 1× HDMI v2.1, 1× DisplayPort v1.4 (8K@60Hz) | Oculink 64Gb/s | RJ45 2.5Gbit, 2× USB4-C, 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7 (MT7925), Bluetooth 5.3 (in M.2 2230 key-E PCIe 3.0 slot) | 192×190×55mm | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | Oculink |
| MS-S1 MAX | €3120 | AMD Ryzen™ AI Max+ 395, 3.0GHz, 16 cores | AMD Radeon™ 8060S, 2.9GHz, 20 compute units | 128GB LPDDR5x, 8000 MT/s | 2TB, NVMe PCIe 4.0 ×4 + x1 M.2 2280, max 8TB each | 1× HDMI v2.1 (8K@60Hz) | 1× PCIe 4.0 ×4 slot | 2× RJ45 10Gbit (Realtek RTL8127), 2× USB4-C (40Gb/s, Display Port 2.0, PD out 15W), 2× USB4-C v2 (80Gb/s, Display Port 2.0, PD out 15W), 3× USB-A 3.2 Gen 2, 2× USB-A 2.0, SD card | WiFi 7 (MT7925), Bluetooth 5.4 (in M.2 2230 key-E PCIe 3.0 slot) | 192×190×55mm | ~50–60 FP16 | 12–15 | 0.0055–0.0075 | 0.10–0.14 | See above using MCX416A-CCAT Mellanox ConnectX-4 2× 50Gbit/s QSFP28 ($146) |
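The Tokens/€ and Tokens/W columns are just the 12–15 tok/s rate divided by price and power draw. A quick sketch of the arithmetic, using the €2080 Bosgame price and an assumed ~110–120 W sustained package draw (the wattage is a guess, not in the table; the other rows reuse the same ranges even though their prices differ, so treat them as rough):

```shell
# Tokens per euro: token rate divided by purchase price (€2080 Bosgame row)
awk 'BEGIN { printf "Tokens/euro: %.4f - %.4f\n", 12/2080, 15/2080 }'
# Tokens per watt: token rate divided by an ASSUMED ~110-120 W package draw
awk 'BEGIN { printf "Tokens/W: %.2f - %.2f\n", 12/120, 15/110 }'
```

This reproduces roughly the listed 0.0055–0.0075 and 0.10–0.14 ranges; at the €3591 Framework price the Tokens/€ figure would be noticeably lower.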
NVIDIA chipset:
Create the openclaw user (`useradd -m` already creates the home directory, so tighten its permissions rather than re-creating it):

```shell
useradd -m -u 987 -s /bin/bash openclaw && chmod 700 /home/openclaw && chown openclaw:openclaw /home/openclaw
```

Make sure `gateway.auth.token` in /home/openclaw/.openclaw/openclaw.json matches the one in docker-compose.yml (`openclaw-gateway.environment.OPENCLAW_GATEWAY_TOKEN`).

Start/stop with `docker compose up -d openclaw-gateway` / `docker compose down`.

Approve a pending device:

```shell
# docker compose exec openclaw-gateway bash
$ /app/openclaw.mjs devices list
Direct scope access failed; using local fallback.
Pending (1)
┌──────────────────────────────────────┬───────────────────────────────────────────────────┬──────────┬───────────────┬──────────┬────────┐
│ Request                              │ Device                                            │ Role     │ IP            │ Age      │ Flags  │
├──────────────────────────────────────┼───────────────────────────────────────────────────┼──────────┼───────────────┼──────────┼────────┤
│ 4ec7a9d0-f5f2-4316-9c07-f9e4fa96620a │ f74fdd392ee3bca006e344a86554ef76b473d7e2bd8502fde │ operator │               │ just now │        │
└──────────────────────────────────────┴───────────────────────────────────────────────────┴──────────┴───────────────┴──────────┴────────┘
$ /app/openclaw.mjs devices approve 4ec7a9d0-f5f2-4316-9c07-f9e4fa96620a
Direct scope access failed; using local fallback.
Approved f74fdd392ee3bca006e344a86554ef76b473d7e2bd8502fde (4ec7a9d0-f5f2-4316-9c07-f9e4fa96620a)
```
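The token pairing mentioned above looks roughly like this (a sketch: the token value is a placeholder; only the service name, the environment variable, and the `gateway.auth.token` key path come from these notes):

```yaml
# docker-compose.yml (fragment) — OPENCLAW_GATEWAY_TOKEN must equal
# gateway.auth.token in /home/openclaw/.openclaw/openclaw.json
services:
  openclaw-gateway:
    environment:
      OPENCLAW_GATEWAY_TOKEN: "change-me-and-mirror-in-openclaw.json"
```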
Control UI requires gateway.controlUi.allowedOrigins (set explicit origins), or set gateway.controlUi.dangerouslyAllowHostHeaderOriginFallback=true to use Host-header origin fallback
then add the following to .openclaw/openclaw.json:
```json
"gateway": {
  "controlUi": {
    "allowedOrigins": [
      "http://localhost"
    ]
  }
}
```

and force that host in the Apache configuration:

```apache
RequestHeader set Origin "http://localhost:18789"
ProxyAddHeaders Off
```
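For context, a sketch of where those two directives might sit in the reverse-proxy vhost. The `Location` block, proxy target, and `ProxyPass` lines are assumptions, not from these notes; `upgrade=websocket` requires Apache 2.4.47+:

```apache
# Hypothetical vhost fragment proxying to the gateway on 127.0.0.1:18789
<Location "/">
    RequestHeader set Origin "http://localhost:18789"
    ProxyAddHeaders Off
    ProxyPass "http://127.0.0.1:18789/" upgrade=websocket
    ProxyPassReverse "http://127.0.0.1:18789/"
</Location>
```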
Instead of the `--bind` and `--port` CLI flags, configure those via .openclaw/openclaw.json:
```json
"gateway": {
  "port": 18789,
  "mode": "local",
  "bind": "loopback"
}
```
Then check logs:
```
openclaw-gateway-1 | 2026-04-05T17:55:40.225+00:00 [canvas] host mounted at http://127.0.0.1:18789/__openclaw__/canvas/ (root /home/node/.openclaw/canvas)
openclaw-gateway-1 | 2026-04-05T17:55:40.296+00:00 [gateway] listening on ws://127.0.0.1:18789, ws://[::1]:18789 (PID 7)
```
Create a Dockerfile:

```dockerfile
FROM alpine/openclaw:2026.4.2
USER root
RUN apt-get update -q \
 && apt-get install -y -q --no-install-recommends \
      python3-bs4 \
      python3-dateutil \
      python3-lxml \
      python3-requests \
      python3-yaml \
 && rm -rf /var/lib/apt/lists/* /var/cache/apt/archives/*
USER openclaw
```

and build it:

```shell
# docker build -t openclaw-custom:2026.4.2 .
```
then use openclaw-custom:2026.4.2 in docker-compose.yaml.
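The swap is just the image line in the gateway service (service name taken from the commands above):

```yaml
# docker-compose.yaml (fragment)
services:
  openclaw-gateway:
    image: openclaw-custom:2026.4.2   # instead of alpine/openclaw:2026.4.2
```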
docker-compose.yml:

```yaml
services:
  ollama:
    image: ollama/ollama:latest
    user: "987:1001"
    network_mode: host
    volumes:
      - /home/openclaw/.ollama:/.ollama
```
```shell
# docker compose up -d ollama
# docker compose exec ollama ollama pull llama3.1:8b-instruct-q4_K_M
# docker compose exec ollama ollama run llama3.1:8b-instruct-q4_K_M
```
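Ollama also exposes an OpenAI-compatible API (which is what the provider `baseUrl` in the config below points at). A quick way to prepare and validate a chat request before firing it; the curl line is left commented since it needs the ollama container running, and the endpoint path is Ollama's documented OpenAI-compatibility route:

```shell
# Build a chat request body for Ollama's OpenAI-compatible endpoint
# (model name matches the pull command above).
cat > /tmp/req.json <<'EOF'
{
  "model": "llama3.1:8b-instruct-q4_K_M",
  "messages": [{"role": "user", "content": "Say hi in one word."}]
}
EOF
# Validate the JSON locally before sending it.
python3 -m json.tool /tmp/req.json > /dev/null && echo "request body OK"
# curl -s http://127.0.0.1:11434/v1/chat/completions \
#   -H 'Content-Type: application/json' -d @/tmp/req.json
```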
.openclaw/openclaw.json looks like this:
```json
"models": {
  "providers": {
    "vllm": {
      "baseUrl": "http://127.0.0.1:11434/v1",
      "apiKey": "dummy-api-key",
      "api": "openai-completions",
      "models": [
        {
          "id": "ollama/llama3.1:8b-instruct-q4_K_M",
          "name": "ollama/llama3.1:8b-instruct-q4_K_M",
          "reasoning": false,
          ...
```
docker-compose.yml volumes for the gateway:

```yaml
volumes:
  - /home/openclaw/.openclaw:/home/node/.openclaw
  - /home/openclaw/.openclaw/.agent-browser:/home/node/.agent-browser
  - /home/openclaw/.openclaw/.cache:/home/node/.cache
```

Then install and test the agent browser:

```shell
~/.openclaw$ ./bin/agent-browser install
~/.openclaw$ ./bin/agent-browser open google.com
```

After updating .openclaw/openclaw.json, re-run `./bin/agent-browser open google.com`.
Onboard with `docker compose run --rm openclaw-cli onboard`, or like below with `docker compose exec openclaw-gateway /app/openclaw.mjs config`:

```shell
# docker compose exec openclaw-gateway /app/openclaw.mjs models list
Model                                    Input       Ctx   Local  Auth  Tags
openrouter/stepfun/step-3.5-flash:free   text        250k  no     yes   default,configured,alias:OpenRouter
openai/gpt-5.1-codex                     text+image  391k  no     yes   fallback#1,configured,alias:GPT
```
When Telegram replies with:

```
OpenClaw: access not configured. Your Telegram user id: 30528162 Pairing code: Z5RQ92MX
```

you need to run:

```shell
# docker compose exec openclaw-gateway /app/openclaw.mjs pairing approve telegram Z5RQ92MX
Approved telegram sender 30528162.
```