Carl Andersson

SUMMARY
DevOps/Site Reliability engineer with 10+ years of experience across infrastructure, networking, and platform engineering. I prefer declarative configuration, battle-tested software, and solving problems properly rather than quickly. Strong background in network architecture, Kubernetes, and Nix-based infrastructure. I've worked across the stack from physical datacenters to cloud platforms, and I build tools when existing ones don't fit.
COMPETENCIES
EXPERIENCE
Evolvit - Infrastructure Consultant
Aug 2025 - Present (on hold)
Building a production Kubernetes environment on VMware for Evolvit. Full platform including a ClusterAPI-managed control plane, Cilium networking, OIDC authentication, and all the bells and whistles. Training staff on Kubernetes, Nix, and Linux administration. Project on hold due to client capacity constraints.
Helicon - Infrastructure Consultant
Sep 2023 - Feb 2025
Stockholm Exergi
Managed HashiCorp-based infrastructure in AWS. Led a Terraform refactoring to improve state management and infrastructure reliability. Recommended a Kubernetes migration path, which the client later adopted. Developer tooling delivered with Nix.
Kraftringen
Maintained production AKS cluster with full observability stack. Built extensive CNPG monitoring for TimescaleDB workloads in Grafana. Managed Kustomize-based deployments and version upgrades.
Viaplay - Site Reliability Engineer
Sep 2022 - Aug 2023
SRE in the Media Asset Management division, managing hybrid cloud infrastructure. Responsible for 10+ PB storage infrastructure, transcoding compute clusters, and on-premise networking. Architected AWS-to-on-premise connectivity for MAM workload distribution. Part of the team managing the physical infrastructure layer underlying Kubernetes clusters.
SDNit - Infrastructure Consultant
Nov 2021 - Sep 2022
Consulting for Viaplay. Initiated and deployed Kubernetes for Media Asset Management. Managed both on-premise RKE2 and EKS clusters with supporting services including Keycloak and observability tooling.
Key achievements:
- Kubernetes deployment: Deployed Kubernetes on-premise using RKE2 on Ubuntu with Ansible, consolidating with previously deployed AWS infrastructure. Kubernetes replaced a custom SaltStack-based "scheduler" that installed systemd units. Collaborated with application developers, enabling them to write control-plane software (e.g., using Kubernetes Jobs to run heavy processing workloads)
See more at SDNit's website.
Dialect - Infrastructure Engineer / SRE
Mar 2019 - Nov 2021
Technical responsibility for a 6-rack datacenter, reporting directly to the CTO for infrastructure decisions and purchases. Managed 400-500 VMs, including MikroTik Cloud Hosted Router instances for customer network isolation.
Key achievements:
- VMware deployment: Deployed a vSphere + vCloud + NSX-T + vSAN stack to replace an unreliable Hyper-V S2D cluster. Customers got self-service via the vCloud portal
- Datacenter consolidation: Migrated Dialect conglomerate hosting from InterXion to Skövde datacenter. Live-migrated VMs over stretched L2, then physically relocated and remounted servers to reuse hardware
- Network modernisation: Continued L2 removal for remaining edge cases, deployed EVPN on Cumulus Linux (Mellanox + Broadcom hardware) for appropriate L2 stretching, initiated IPv6 rollout
- Backup infrastructure: Deployed Cohesity for VMware and legacy Hyper-V workloads
IT Support & Sysadmin
2014 - 2019
IT support and Windows Server administration serving 50+ SME customers (5-50 users each). Built a high-quality support function from scratch together with one colleague; as of 2025 the support team numbers 15+. Specialized in MikroTik networking, including site-to-datacenter tunnel infrastructure. PowerShell automation for cross-customer environment provisioning. This role established the networking foundation that led to my later datacenter and infrastructure work.
Key achievements:
- Network architecture redesign: Eliminated chronic STP looping issues by migrating customer connectivity from L2 tunneling to L3 routing, requiring subnet migrations across the customer base
- O365 crisis response: When the parent company went bankrupt, wrote PowerShell + Puppeteer automation to create admin users and accept new CSP partnerships across 1000+ Microsoft 365 tenants
- Internal tooling: Built PBX queue monitoring with notifications, custom TeamViewer auto-registration installer (before native support existed), HTTP API for physical phone PBX group login/logout
- MikroTik and networking instructor for all Dialect branch offices
PROJECTS
nix-csi
Kubernetes CSI driver that mounts the Nix store into pods, enabling container image replacement with Nix packages. Uses hardlink views with shared inodes for memory-efficient page cache sharing across pods. Works on managed Kubernetes without node modifications.
Technical details
Problem: Container images have inefficient layer sharing, fragile build caching, and no intrinsic SBOM. Nix solves these at the package level but needed Kubernetes integration.
How it works:
- DaemonSet deployment—no node modifications required. Works on managed clusters (EKS, GKE, AKS).
- Creates a node-local Nix store at /var/lib/nix-csi.
- For each CSI volume, creates a hardlink view of requested store paths.
- Mount modes: read-only (direct bind mount, shared inodes) or read-write (overlayfs layer).
Key benefit: Hardlinked files share page cache across pods. When one pod loads a shared library, other pods get cached pages. Container layers can't do this—each container gets separate inodes even with the same base image.
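As a rough illustration of the hardlink-view idea (not the actual driver code; the store root layout and function name are placeholders, and symlink/permission handling is omitted):

```python
import os

def build_hardlink_view(store_paths, view_dir, store_root="/var/lib/nix-csi"):
    """Mirror each requested store path into a per-volume view directory
    using hardlinks, so every pod mounting the view shares the same inodes
    (and therefore the same page cache) for identical files."""
    for store_path in store_paths:
        src_root = os.path.join(store_root, store_path.lstrip("/"))
        for dirpath, _dirnames, filenames in os.walk(src_root):
            rel = os.path.relpath(dirpath, store_root)
            dst_dir = os.path.join(view_dir, rel)
            os.makedirs(dst_dir, exist_ok=True)
            for name in filenames:
                dst = os.path.join(dst_dir, name)
                if not os.path.exists(dst):
                    # Hardlink instead of copy: same inode, no extra disk or cache use.
                    os.link(os.path.join(dirpath, name), dst)
```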
Comparison: Similar concept to nix-snapshotter, but doesn't require containerd modifications. Inspired Flox's "imageless Kubernetes" approach.
Written in Python. In production use in hetzkube.
easykubenix
Kubernetes manifest generation using the NixOS module system. Composes cleanly, scales well, and lets you override with mkForce instead of JSON patches. Renders Helm charts into the module system for compatibility.
Technical details
Why: Helm templates are stringly-typed and hard to debug. Kustomize patches are limited—strategic merge works until it doesn't, JSON patches are verbose. The NixOS module system already solves configuration composition well.
Features:
- Deep merging from multiple sources
- Type checking with clear errors
- Override at any level with mkForce/mkOverride (see the sketch after this list)
- Conditional configuration with dependency tracking
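To illustrate the override semantics only (easykubenix itself is written in Nix; the priority numbers and names below are illustrative, loosely mirroring how mkForce-style definitions beat ordinary ones):

```python
from dataclasses import dataclass
from typing import Any

@dataclass
class Definition:
    value: Any
    priority: int = 100            # ordinary definition

def force(value: Any) -> Definition:
    # Analogous to mkForce: a stronger (numerically lower) priority.
    return Definition(value, priority=50)

def merge(*definitions: Definition) -> Any:
    """Pick the strongest definition; the real module system also deep-merges
    attribute sets and errors on conflicting equal-priority scalars."""
    return min(definitions, key=lambda d: d.priority).value

# Two "modules" set the same replica count; the forced value wins.
replicas = merge(Definition(2), force(5))
print(replicas)  # 5
```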
Helm compatibility: Renders Helm charts and imports results into the module system. Override rendered resources like any other. Covers most use cases, though some hooks/lifecycle features don't translate.
nix-csi integration: Reference Nix packages directly in manifests—nix-csi automatically identifies and pulls required store paths.
In production use managing all hetzkube deployments: cert-manager, Cilium, kube-prometheus-stack, Keycloak, and applications.
hetzkube
LARPing production-grade Kubernetes on Hetzner on a strict budget. Full platform: ClusterAPI control plane, NixOS nodes, Cilium with Gateway API, custom IPAM for IPv4/IPv6 LoadBalancer address reuse, enforced dual-stack via Kyverno, kube-prometheus-stack, and OIDC authentication via Keycloak across all services.
Technical details
Stack:
- Control plane: ClusterAPI-managed CX22 node(s).
- Workers: NixOS nodes rebuilt from scratch on every initialization.
- Networking: Cilium for CNI, ingress, Gateway API and network policies. Full dual-stack (IPv4+IPv6) enforced via Kyverno.
- Custom IPAM: Reuses node IP addresses as LoadBalancer addresses to avoid Hetzner floating IP/LB costs. Each node's /64 IPv6 block is carved up for services (see the sketch after this list). Kyverno enforces IP sharing annotations to maximize IPv4 utilization.
- Storage: hetzner-csi for traveling volumes, local-path-provisioner for node-bound volumes.
- Observability: kube-prometheus-stack (Prometheus, Grafana, Alertmanager).
- Auth: Keycloak OIDC for all! (Kube, Grafana, Headlamp, pgAdmin, ...)
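A minimal sketch of the /64-carving idea referenced above (illustrative only: the prefix, offset, and function name are made up, and the real allocator also handles IPv4 sharing and the Kyverno annotations):

```python
import ipaddress

def carve_service_ips(node_prefix: str, count: int, reserve: int = 256):
    """Hand out `count` IPv6 addresses from a node's /64, skipping the first
    `reserve` addresses so they stay available for the node itself."""
    block = ipaddress.IPv6Network(node_prefix)
    assert block.prefixlen == 64, "expected a per-node /64"
    return [block[reserve + i] for i in range(count)]

# Documentation prefix (2001:db8::/32) used as a stand-in for a node's block.
for ip in carve_service_ips("2001:db8:1:2::/64", count=3):
    print(ip)  # 2001:db8:1:2::100, ::101, ::102
```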
Deployed components:
All manifests generated via easykubenix. Application delivery via nix-csi.
Crossfaction Battlegrounds (World of Warcraft)
First public implementation of crossfaction PvP queuing for World of Warcraft private servers (~2012). Balanced queue times between factions by allowing mixed-faction teams. Implemented multiple queue modes including item-level balancing and FIFO. Still used by private servers today, and the concept was later adopted by Blizzard as an official feature.
Technical details
Problem: WoW factions (Alliance/Horde) had imbalanced PvP populations. On private servers, 80/20 splits meant 30+ minute queues for one faction, instant for the other.
Implementation (C++):
- Reused arena faction field to temporarily assign players to opposite faction during battlegrounds
- Invalidated the player info cache for all BG players
- Faked faction responses in player info cache packets
- Modified scoreboard packets so client UI displayed teams correctly
- Queue modes: simple (first-come-first-served) and item-level balanced (with gear-swap prevention)
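The original lives in C++ inside the battleground queue code; purely as an illustration of the item-level balanced mode, a greedy assignment looks roughly like this (names and shape are invented):

```python
def balance_teams(players):
    """players: list of (name, item_level) tuples pulled from the queue.
    Greedy heuristic: hand the best-geared remaining player to whichever
    team currently has the lower total item level."""
    teams, totals = {"A": [], "B": []}, {"A": 0, "B": 0}
    for name, ilvl in sorted(players, key=lambda p: p[1], reverse=True):
        side = "A" if totals["A"] <= totals["B"] else "B"
        teams[side].append(name)
        totals[side] += ilvl
    return teams

# Example: mixed-faction pool, split purely by gear rather than faction.
print(balance_teams([("Thrall", 270), ("Jaina", 264), ("Varian", 251), ("Sylvanas", 245)]))
```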
Impact: Published publicly—first implementation any server could use. Spread rapidly across the private server community during the 3.3.5a era when faction imbalance was severe. Still in use today.
Blizzard later added crossfaction instances to retail WoW (dungeons/raids first, then battlegrounds). The concept was proven on private servers years earlier.
One of my first real programming projects—modifying a large C++ codebase to solve a problem affecting thousands of players.
Other
- dinix - Render dinit service configurations using Nix
- registry - Nix derivations for all OpenTofu registry providers
- RC-Butiken - C# sync daemon for 20,000+ SKUs from supplier to Shopify. SQLite state tracking with daily diffing to handle API rate limits. Built inventory/price change reports for strategic ordering and crawled Traxxas.com for spare parts categorization.
Technical details
Problem: Swedish RC hobby shop needed automated product catalog management. 20,000+ products, Shopify API rate limits, daily supplier updates.
Architecture:
- SQLite state tracking: store last-synced state, diff against daily supplier data, push only changes (see the sketch after this list). Reduced daily API calls from 20,000+ to a few hundred.
- Selective sync: inventory/prices updated daily, images/descriptions imported once then shop-maintained.
- Daily reports: inventory changes, price changes, new products—enabled strategic ordering decisions.
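A minimal Python sketch of the diff-then-push loop (the production daemon is C#, and `push_update` stands in for a rate-limited Shopify call):

```python
import sqlite3

def sync(conn: sqlite3.Connection, supplier_rows, push_update):
    """supplier_rows: iterable of (sku, price, stock) from the daily feed.
    Push only rows that differ from the last synced state, then record them."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS state (sku TEXT PRIMARY KEY, price REAL, stock INTEGER)"
    )
    changed = 0
    for sku, price, stock in supplier_rows:
        previous = conn.execute(
            "SELECT price, stock FROM state WHERE sku = ?", (sku,)
        ).fetchone()
        if previous != (price, stock):           # new SKU or changed values
            push_update(sku, price, stock)       # placeholder for the Shopify API call
            conn.execute(
                "INSERT INTO state (sku, price, stock) VALUES (?, ?, ?) "
                "ON CONFLICT(sku) DO UPDATE SET price = excluded.price, stock = excluded.stock",
                (sku, price, stock),
            )
            changed += 1
    conn.commit()
    return changed
```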
Traxxas crawler: Scraped Traxxas.com parts catalog to extract car model → category → part relationships. Generated tags/categories for Shopify products. Bootstrapped better spare parts browsing UX.
Ran reliably for years with minimal maintenance.