
Localized Agentic-AI Workstations
Local, agentic AI without the cloud tax. Curated components and the Brane SDK give you faster time-to-train at lower power—ready for lab, edge, and production.
AI on your desk — fast, quiet, and power-aware
We wanted the perfect deskside machine for LLMs, SLMs, and modern neural networks: best-in-class performance with power kept in check and a small, quiet footprint. Every component is carefully selected and tuned for throughput per watt, then fully set up and tested so you can train on day one. No “server tax,” predictable costs, and edge- and business-ready deployment.

Desk-first
Small, quiet chassis you can sit next to—edge/office-ready without datacenter overhead.
Performance per watt
Right-sized accelerators, efficient power delivery, tuned airflow—more results per joule.
No “server tax”
Own the hardware. On-prem control, predictable TCO, no egress or per-seat GPU fees.
Built for flexibility. Choose best-in-class accelerators (AMD Instinct GPUs or NVIDIA RTX 5000/6000, Kalray DPUs, and AMD/Xilinx FPGAs) and let the Brane SDK run them from one toolchain. Configure power, performance, and price to match your workload.
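As a rough sketch of the one-toolchain idea (plain PyTorch only; the Brane SDK is assumed to layer on top of standard ROCm/CUDA builds), the training step below runs unchanged on an AMD Instinct GPU or an NVIDIA RTX GPU; the model and batch are placeholders.

# Minimal sketch: the same PyTorch training step runs unchanged on an
# AMD Instinct GPU (ROCm build) or an NVIDIA RTX GPU (CUDA build).
# Plain PyTorch only; the Brane SDK toolchain is assumed to sit on top of this.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(1024, 1024).to(device)      # placeholder model
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
x = torch.randn(32, 1024, device=device)      # placeholder batch

loss = model(x).pow(2).mean()                 # dummy objective
loss.backward()
opt.step()
print(f"one step on {device}: loss={loss.item():.4f}")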
- CPU: Management & General Ops
- AMD Instinct: Massive Parallel AI Acceleration
- Kalray DPU: Predictable Parallel Processing
- AMD/Xilinx AI FPGA: Reconfigurable Custom Logic
- Third-Party Accelerator: Domain-Specific Optimization
- Memory: Ultra-Fast Data Access
- Network: High-Speed Interconnect
Flexible configuration — tuned to your power budget
- Maximum Performance: Up to 3× AMD Instinct MI210 (or NVIDIA GPUs) for peak compute and the shortest time-to-train.
- Best Perf/Watt: 1× MI210 plus 1 or 2 Kalray DPUs to intelligently offload streaming and data movement, maintaining throughput while cutting watts (see the perf-per-watt sketch below).
- Custom Acceleration: 1× MI210 + 1× AMD/Xilinx AI FPGA (+ optional Kalray DPU) for deterministic pipelines, hardware IP, and ultra-low latency.
Outcome: build the desk-friendly rig that meets your SLA—speed where it matters, power where it counts, fully validated and ready to train day one.
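If you want to compare candidate configurations against a power budget, the helper below shows the arithmetic. It is a minimal sketch: the throughput and wall-power inputs are your own measurements, not published figures.

# Back-of-the-envelope helper: rank candidate configurations by perf/watt.
# All inputs are your own measured numbers; none are published figures.
from typing import Dict, Tuple

def rank_by_perf_per_watt(measured: Dict[str, Tuple[float, float]]) -> None:
    # measured maps config name -> (tokens_per_s, wall_watts) under the same load
    for name, (tps, watts) in sorted(
        measured.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True
    ):
        print(f"{name}: {tps / watts:.2f} tokens/s per watt")

# Example (replace the values with your measurements):
# rank_by_perf_per_watt({
#     "3x MI210 (Max Performance)":      (tokens_per_s, wall_watts),
#     "1x MI210 + DPU (Best Perf/Watt)": (tokens_per_s, wall_watts),
# })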
CPU (Orchestrator)
Central coordination running the Brane SDK to route tasks to the optimal accelerator.
GPU (AI Accelerator)
Massive parallel processing for AI training and inference workloads.
DPU (Data Processor)
Specialized data movement and streaming with ultra-low latency.
FPGA (Custom Logic)
Reconfigurable hardware for domain-specific algorithms.
ASIC (Specialized)
Purpose-built circuits for maximum performance and efficiency.
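The division of labor above can be pictured as a simple dispatch loop. The sketch below is a toy illustration only: the handler names and routing rules are hypothetical and are not the Brane SDK API.

# Toy illustration of CPU-side orchestration across heterogeneous accelerators.
# Handler names and routing rules are hypothetical, not the Brane SDK API.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Task:
    name: str
    kind: str  # "training", "streaming", "custom_kernel", ...

def run_on_gpu(t: Task):  print(f"[GPU ] massively parallel compute: {t.name}")
def run_on_dpu(t: Task):  print(f"[DPU ] streaming / data movement:  {t.name}")
def run_on_fpga(t: Task): print(f"[FPGA] deterministic custom logic: {t.name}")
def run_on_cpu(t: Task):  print(f"[CPU ] general ops / fallback:     {t.name}")

ROUTES: Dict[str, Callable[[Task], None]] = {
    "training": run_on_gpu,
    "streaming": run_on_dpu,
    "custom_kernel": run_on_fpga,
}

def dispatch(task: Task) -> None:
    # The CPU acts as orchestrator: pick the best-fit accelerator, else fall back.
    ROUTES.get(task.kind, run_on_cpu)(task)

for t in [Task("fine-tune SLM", "training"),
          Task("tokenize corpus", "streaming"),
          Task("low-latency filter", "custom_kernel"),
          Task("log rotation", "housekeeping")]:
    dispatch(t)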
Model Development · Deployment & Ops · Edge & Control

Markets we serve
Performance · Reliability · Control
Featured Workstations
Pick the power profile that fits your desk and workload — each unit is validated, tuned, and ready to train.
Quantum Core — HX-9000
Max Power
High-throughput deskside system designed for intensive AI training and on-premises serving without datacenter complexity. Built for demanding workloads that need maximum computational power.
Key Specifications
- AMD Threadripper (79xx) with DDR5
- Up to 3× AMD Instinct MI210 (1× default)
- Optional accelerators: Kalray TC4, AMD/Xilinx AI FPGA
- Memory: 128 GB DDR5 standard, expandable to 512 GB
Cyber Core — HX-3000
Best Efficiency
Office-ready compute solution with predictable power consumption, perfect for agentic workflows and local inference. Balances performance with energy efficiency for everyday AI tasks.
Key Specifications
- Intel Core i9 (14th Gen) with DDR5
- NVIDIA RTX 4070/4060 (RTX 5000/6000 optional)
- Optional accelerators: Kalray TC4, AMD/Xilinx AI FPGA
- Memory: 64 GB DDR5 standard, expandable to 256 GB
Want to compare more configurations? Browse the complete lineup in our shop.
Visit the Shop

FAQ
Quick answers to the most common configuration and deployment questions.
Which model should I choose?
HX-9000 is for maximum compute (multi-accelerator training and heavy R&D). HX-3000 targets the best perf/watt for desks, labs, and edge work—great for on-prem inference and development. Other Brane workstations and custom configurations are available on demand if you need a different power, size, or accelerator mix—just reach out.
Will it work on a standard office circuit?
Yes—both systems are designed for typical deskside power. We’ll confirm the exact draw for your chosen accelerators during configuration.
How loud are these at the desk?
We tune chassis, airflow, and fan curves for quiet deskside operation. That said, as compute increases, heat rises—and so does fan speed. For most day-to-day tasks the system sounds like a standard workstation; at sustained full capacity it will get noticeably louder to cool the hardware. If you have strict acoustic needs, we can help with profiles, placement, or component choices to keep noise in check.
What OS and frameworks are included?
We support Linux (Ubuntu LTS by default) and Windows 11 Pro. The Brane SDK comes pre-installed with containerized toolchains, supporting AMD Instinct (ROCm) by default and NVIDIA CUDA stacks as an option. Note: the Kalray TC4 accelerator does not offer Windows drivers, so TC4 configurations require Linux. Other distros are available on request.
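If you want to confirm which GPU stack a given machine exposes, a short probe like the one below (standard PyTorch calls only; the preinstalled environment is an assumption) reports whether the ROCm or CUDA build is active.

# Quick probe of the installed PyTorch build: ROCm (AMD Instinct) vs CUDA (NVIDIA).
import torch

print("PyTorch:", torch.__version__)
if torch.version.hip:        # set on ROCm/HIP builds
    print("Backend: ROCm/HIP", torch.version.hip)
elif torch.version.cuda:     # set on CUDA builds
    print("Backend: CUDA", torch.version.cuda)
else:
    print("Backend: CPU-only build")
print("GPUs visible:", torch.cuda.device_count())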
Can I mix accelerators or upgrade later?
Yes. HX-9000 supports adding accelerators (e.g., additional MI210s, Kalray TC4, AMD/Xilinx FPGA). HX-3000 supports one additional accelerator option. Memory can be configured at purchase and upgraded later. We also support custom accelerators on request—talk to us about your PCIe cards or specialized hardware.
What’s included before shipping?
Assembly, burn-in, stability checks, and a final validation pass with the Brane SDK so you can train day one. Lead times and warranty terms are shared at order confirmation.
Need help sizing the right workstation?

Tell us your models, datasets, and power envelope—we’ll recommend the best fit for your desk and budget.