Running an AI Assistant on a $300 Mini PC

The server that runs my entire AI assistant setup is a Beelink SER5. Ryzen 5 5500U, 16GB RAM, 500GB NVMe. It cost about $300 and sits on a shelf. No fans worth hearing. Power draw is negligible.

It runs Ubuntu Server, OpenClaw, a handful of MCP bridge tools, Cloudflare tunnels, and a backup pipeline. It handles email monitoring across multiple accounts, a morning briefing, news digests, SMS monitoring, and various automation tasks — all delivered to Telegram on cron schedules.
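
The delivery cadence is plain cron. A sketch of what the crontab might look like (the times and script names here are illustrative, not the real setup):

```
# m    h     dom mon dow  command
30     6     *   *   *    /home/me/scripts/morning-briefing.sh   # daily briefing -> Telegram
0      8,18  *   *   *    /home/me/scripts/news-digest.sh        # news digests -> Telegram
*/15   *     *   *   *    /home/me/scripts/prefetch-email.sh     # email monitoring across accounts
```

Each script does its own fetching and formatting, then hands the result to Telegram; cron is just the scheduler.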

Why local instead of cloud

The knee-jerk answer to “where should I run this” in 2026 is a cloud VM. For this use case, that’s wrong.

Always-on, no monthly bill. The box runs 24/7 on my home network. No $5-20/month VPS, no compute charges, and the electricity cost is a rounding error. Over a year, the one-time $300 cost comes out below most VPS plans.

Full control. I own the filesystem. I can SSH in, poke around, edit configs, restart services. No container abstractions, no platform limitations, no “please contact support to increase your resource allocation.”

Data stays local. The workspace files, email prefetch data, config snapshots, and assistant memory all live on a disk I control. The only things that leave the box are API calls to model providers (Anthropic, OpenAI, Google) and outbound messages to Telegram.

What actually runs on it

The box is mostly idle. That’s the key insight — the AI inference happens in the cloud via API calls. The local server is an orchestrator, not a compute node. It runs:

  • OpenClaw — the AI assistant platform. Manages the agent, cron jobs, tools, workspace files.
  • mcporter — MCP bridge to Microsoft 365. Handles email and calendar access for multiple tenants.
  • gog — Gmail and Google Calendar CLI access.
  • OpenMessage — SMS monitoring via Google Messages.
  • Cloudflare tunnel — Exposes the voice endpoint (voice.trissmanifold.com) and SSH access without opening ports on my router.
  • rclone — Nightly backups to OneDrive.
  • git — Nightly auto-commits of workspace changes.
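
The tunnel piece is a single cloudflared config file. A minimal sketch of the ingress rules — the local ports and the SSH hostname are assumptions; only voice.trissmanifold.com comes from the setup above:

```yaml
tunnel: <tunnel-uuid>
credentials-file: /home/me/.cloudflared/<tunnel-uuid>.json

ingress:
  # Public voice endpoint, proxied to a local HTTP service.
  - hostname: voice.trissmanifold.com
    service: http://localhost:8080
  # SSH over the tunnel -- no ports opened on the router.
  - hostname: ssh.example.com
    service: ssh://localhost:22
  # Everything else gets a 404.
  - service: http_status:404
```

Ingress rules match top to bottom, and the catch-all 404 rule at the end is required. SSH clients connect through `cloudflared access` on the client side rather than to an open port.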

The daily CPU load is essentially zero except during prefetch script runs and the occasional interactive session. RAM usage sits around 4-5GB. This box could be half the spec and still be fine.

The recovery plan

The obvious risk: if this box dies, everything stops. No email monitoring, no briefings, no automation. It’s a single point of failure sitting on a shelf.

The mitigation is layers of backup:

  • Nightly rclone sync to OneDrive — all workspace files, configs, and scripts
  • Nightly git auto-commit — workspace changes tracked with full history
  • Config snapshot script — on-demand snapshot of all key config files, uploaded to OneDrive as a timestamped archive
  • The Beelink itself is commodity hardware. If it dies, I buy another one, install Ubuntu, restore from backups, and re-authenticate the API integrations. The longest part of recovery is the OAuth flows, not the setup.
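
The git layer is the simplest of the three. A sketch of the nightly auto-commit, assuming a WORKSPACE variable pointing at the workspace checkout (the variable name and paths are mine, not the actual script):

```shell
#!/usr/bin/env sh
# Nightly auto-commit of workspace changes (sketch).
# WORKSPACE is an assumption: point it at the real workspace checkout.

auto_commit() {
    cd "$1" || return 1
    git add -A
    # Commit only when something is actually staged, so the history
    # stays quiet on days with no changes.
    if ! git diff --cached --quiet; then
        git commit -m "auto-commit $(date -u +%Y-%m-%dT%H:%M:%SZ)"
    fi
}

# Invoked from cron with WORKSPACE set; no-op otherwise.
[ -n "${WORKSPACE:-}" ] && auto_commit "$WORKSPACE" || true
```

Run from cron right before the rclone sync, so OneDrive always receives a workspace whose history is already committed.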

I’ve thought about redundancy — a second box, a cloud failover. For now it’s overkill. This isn’t a production service with SLAs. It’s my personal assistant. If it’s down for a few hours while I set up a replacement, the worst case is I check my own email like a normal person.

Would I recommend this setup?

For someone who already runs Linux and is comfortable with CLI tools, SSH, and cron — absolutely. The Beelink SER5 specifically is a solid choice: quiet, low power, enough horsepower for orchestration work, and cheap enough to treat as disposable.

The hard part isn’t the hardware. It’s everything that runs on it: the agent configuration, the triage rules, the prefetch scripts, the cron scheduling, the monitoring pipeline. The box just sits at the bottom of the stack and does its job.