Infrastructure & Monitoring Portfolio

Self-hosted cloud production · Local security monitoring · Hybrid remote management

1 · Project Overview: GCP Cloud Production

This environment runs on a Google Cloud Platform e2-micro instance in Iowa. It's a fully self-hosted stack handling VPN access, DNS filtering, password management, reverse proxying, file access, and documentation — all inside Docker containers on a single 1 GB VM.

GCP Service Stack

  • WireGuard VPN: Secure tunnel into the lab over UDP/443.
  • Pi-hole v6: Network-wide DNS sinkhole — ads and trackers blocked at the resolver.
  • Vaultwarden: Self-hosted, Bitwarden-compatible password server. Credentials stay on my own infrastructure.
  • Nginx Proxy Manager: Reverse proxy with automatic SSL via Let's Encrypt.
  • Filestash: Browser-based remote file access over the VPN.
  • Wiki.js: This documentation site, running on a SQLite backend.
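
The stack above could be composed roughly as follows. This is an illustrative sketch, not the exact production file: image names are the commonly published ones, volume and environment configuration is omitted, and host ports follow the topology diagram in section 3 (in production, several services actually share the WireGuard container's network namespace, as noted under Engineering Challenges).

```yaml
# docker-compose.yml — illustrative sketch of the GCP service stack
services:
  wireguard:
    image: linuxserver/wireguard
    cap_add: [NET_ADMIN]
    ports: ["443:51820/udp"]     # VPN entry point over UDP/443
    restart: unless-stopped
  pihole:
    image: pihole/pihole
    ports: ["8088:80", "53:53/udp"]  # admin UI + DNS resolver
    restart: unless-stopped
  vaultwarden:
    image: vaultwarden/server
    ports: ["8080:80"]
    restart: unless-stopped
  npm:
    image: jc21/nginx-proxy-manager
    ports: ["81:81"]             # admin UI; proxied hosts not shown
    restart: unless-stopped
  filestash:
    image: machines/filestash
    ports: ["8334:8334"]
    restart: unless-stopped
  wikijs:
    image: requarks/wiki
    ports: ["3001:3000"]
    restart: unless-stopped
```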

2 · Project Overview: Local WSL2 Monitoring Hub

The second environment lives on my home PC inside WSL2 (Ubuntu on Windows). It focuses on observability, security automation, and monitoring the GCP node remotely via gcloud compute ssh. All services run in Docker and are accessible over the local WireGuard tunnel.

WSL Service Stack

  • Grafana & Prometheus: Metrics dashboards and time-series data collection.
  • CrowdSec: Automated threat detection — watches logs and bans malicious IPs via its bouncer component.
  • Uptime Kuma: Live uptime monitoring with alerting for every service.
  • cAdvisor & Node Exporter: Container and hardware telemetry for Prometheus.
  • WG-Easy: Local WireGuard UI for managing home network VPN clients.
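
A minimal Prometheus scrape configuration for this stack might look like the fragment below. Target ports for cAdvisor follow the diagram in section 3; the Node Exporter port (9100) is its upstream default and the hostnames assume Docker Compose service names — both are assumptions, not taken from the production config.

```yaml
# prometheus.yml — illustrative scrape config for the WSL2 monitoring node
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]   # hardware telemetry (default port)
  - job_name: cadvisor
    static_configs:
      - targets: ["cadvisor:8081"]        # per-container metrics
```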

3 · Network Architecture Diagram

The diagram below shows the logical separation between both environments and how the WSL2 machine manages the GCP node remotely via gcloud compute ssh.

Logical network topology · WSL2 Local Hub (top) → GCP Cloud Node (bottom)

```mermaid
graph TB
    WSL_CLI[WSL Terminal Ubuntu]
    subgraph Monitoring_Stack [Project 2 - Local WSL2 Node Home PC]
        UK[Uptime Kuma Port 3002]
        GR[Grafana Port 3001]
        PR[Prometheus Port 9090]
        CS[CrowdSec]
        WG_E[WG-Easy Port 51821]
        CAD[cAdvisor Port 8081]
    end
    WSL_CLI --> UK
    WSL_CLI --> GR
    WSL_CLI --> PR
    WSL_CLI --> CS
    WSL_CLI --> WG_E
    WSL_CLI --> CAD
    WG_GCP[WireGuard Gateway]
    subgraph GCP_Internal [Project 1 - GCP Cloud Node Iowa]
        VW[Vaultwarden Port 8080]
        PH[Pi-hole v6 Port 8088]
        NPM_A[NPM Admin Port 81]
        FS[Filestash Port 8334]
        WK[Wiki.js Port 3001]
    end
    WG_GCP --> VW
    WG_GCP --> PH
    WG_GCP --> NPM_A
    WG_GCP --> FS
    WG_GCP --> WK
    WSL_CLI -.->|gcloud ssh Remote Management| WG_GCP
    style Monitoring_Stack fill:#0d1117,stroke:#3fb950,stroke-width:2px,color:#fff
    style GCP_Internal fill:#0d1117,stroke:#58a6ff,stroke-width:2px,color:#fff
    style WSL_CLI fill:#161b22,stroke:#00ff00,stroke-width:2px,color:#00ff00
    style WG_GCP fill:#1f6feb,color:#fff
    style WG_E fill:#1f6feb,color:#fff
```

4 · Engineering Challenges

  • Resource Optimization on 1 GB RAM: Running six containerized services on a free-tier e2-micro VM requires careful memory management. I added a 1 GiB Linux swap file and tuned container restart policies to keep everything stable within the resource cap.
  • Network Topology & Port Routing: Several services share the WireGuard container's network namespace instead of getting their own IP. Making Pi-hole DNS, NPM, and Vaultwarden coexist without port conflicts meant mapping each service to a unique external port and carefully ordering container startup dependencies.
  • Hybrid Remote Management: The WSL2 environment manages the GCP node entirely via gcloud compute ssh — no SSH port is exposed to the public internet. This keeps the attack surface minimal while still giving full terminal access from home.
  • Security Orchestration: CrowdSec on the local node monitors aggregated logs from both environments. WireGuard is the only public-facing entry point — every admin UI is reachable only after connecting to the VPN.
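
The swap file mentioned above is standard one-off provisioning on the VM. The commands below are the usual sequence (run as root); the path /swapfile is the conventional choice, not necessarily the one used in production.

```shell
# Create and enable a 1 GiB swap file on the e2-micro node
sudo fallocate -l 1G /swapfile    # reserve 1 GiB of disk
sudo chmod 600 /swapfile          # restrict access to root
sudo mkswap /swapfile             # format it as swap space
sudo swapon /swapfile             # enable it immediately
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab   # persist across reboots
```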
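
The gcloud-based management session described above typically looks like the command below. The instance name is a placeholder, the zone assumes Iowa (us-central1), and the IAP tunneling flag is one common way to reach a VM with no public SSH port — the exact flags used here are an assumption.

```shell
# Open a shell on the GCP node from WSL2 without a publicly exposed SSH port
gcloud compute ssh my-instance --zone=us-central1-a --tunnel-through-iap
```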