Every software project has a moment of truth—that transition from "it works on my machine" to "it must survive a reboot, network outages, and the occasional stray process kill." For the Countryside Forge project, that moment came fast and taught us lessons worth sharing.
What began as a rapid proof-of-concept for Pastor Marcelo Leite at the Countryside Christian Church in Clearwater evolved into a real-world journey through production challenges, deployment architectures, and the hard-won lessons that separate working code from reliable infrastructure. Here's how we moved from prototype to production—and the obstacles we encountered along the way.
What We Built
Forge consists of two complementary tools designed to streamline operations at the church:
- The Impact Map: A mobile-first platform where students log "Kingdom Acts" across Pinellas County. The data feeds live campus displays, creating a visible record of collective ministry impact.
- The Next Gen Catalyst Toolkit: A "Mission Control" dashboard that automates administrative busywork for pastoral teams, freeing staff to focus on deepening discipleship engagement.
Both tools are deployed at https://redbeardtampa.com/demos/next-gen-catalyst/forge-pro.html and protected by the access code redbeard2026.
The Production Challenges
With the prototype validated, we faced the uncomfortable truth: getting code to run reliably in production requires solving an entirely different set of problems than getting code to work correctly. Here's what stood in our way.
Challenge #1: Process Persistence
Our initial backend process was started with nohup, a simple approach that served us well during development. When terminal sessions closed, however, the process died with them. The prototype worked beautifully—until it didn't.
Seeing this limitation, we migrated to pm2 for process management, expecting a cleaner path to persistence. The rapid pivot revealed a core truth of production environments: each solution tends to surface the next problem.
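As a sketch of the two approaches (the process name and log path here are illustrative, and pm2 is assumed to be installed globally):

```shell
# Development-era approach: the process outlives the command, but not
# the session that launched it, and nothing restarts it after a crash.
nohup node forge_proxy.js > forge.log 2>&1 &

# pm2 approach: the process is daemonized and restarted on failure,
# and the process list can be resurrected at boot.
pm2 start forge_proxy.js --name forge-proxy
pm2 save       # persist the current process list
pm2 startup    # print the boot-time init command for this platform
```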
Challenge #2: Path Resolution
The pm2 migration surfaced a subtle but critical issue—relative paths that worked in interactive shell environments broke entirely under pm2's operational context. The error message was telling:
$ pm2 start forge_proxy.js
[PM2] Process crashed: Error: spawn python3 ENOENT
The path resolution that worked when we logged in simply didn't apply to daemon processes. System environments and user environments are not always the same, and code that doesn't account for this difference will fail in production even when it passes every test.
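The mismatch is easy to reproduce. In this sketch, `env -i` launches a child process with an empty environment, roughly approximating the stripped-down context a daemon may receive (the exact environment pm2 provides will differ):

```shell
# An interactive shell inherits a rich PATH from login profiles.
echo "interactive PATH: $PATH"

# A process started with an empty environment may see only a minimal
# default PATH (or none at all), depending on the shell.
env -i /bin/sh -c 'echo "stripped PATH: ${PATH:-<unset>}"'

# The same lookup that succeeds interactively can fail in the stripped
# environment -- the root of the `spawn python3 ENOENT` crash.
env -i /bin/sh -c 'command -v python3 || echo "python3 not found (ENOENT)"'
```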
We resolved this by moving to native systemd service definitions with explicit path configuration, providing complete control over the execution environment:
[Unit]
Description=Countryside Forge Proxy Service
After=network.target
[Service]
Type=simple
User=agentnode
WorkingDirectory=/home/agentnode/forge
Environment=PATH=/usr/bin:/usr/local/bin
ExecStart=/usr/bin/node forge_proxy.js
Restart=on-failure
[Install]
WantedBy=multi-user.target
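With the unit saved as, say, /etc/systemd/system/forge-proxy.service (the filename is illustrative), the standard systemd workflow installs and verifies it:

```shell
sudo systemctl daemon-reload               # re-read unit definitions
sudo systemctl enable --now forge-proxy    # start now and at every boot
systemctl status forge-proxy               # confirm the service is active
journalctl -u forge-proxy -f               # follow the service logs
```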
Challenge #3: DNS Resolution in Background Services
Surprisingly, the final production push exposed an issue unrelated to our code or service configuration. The host VM's systemd-resolved DNS service conflicted with the cloudflared binary when it ran as a non-interactive background service. DNS lookups failed, and the Cloudflare tunnel broke with error 1033: what looked like intermittent network loss was actually a DNS configuration conflict.
Resolving this meant carefully configuring the environment in which cloudflared operated, decomposing what seemed like a single connectivity problem into separate path and DNS subproblems.
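A diagnostic sketch of how that decomposition can work, assuming cloudflared runs as a systemd unit named cloudflared (the unit name and test hostname are illustrative). Comparing a query through the systemd-resolved stub listener with a direct upstream query separates the DNS subproblem from a genuine network outage:

```shell
# Check what the tunnel daemon itself is logging about name resolution.
systemctl status cloudflared
journalctl -u cloudflared --since "15 min ago" | grep -i dns

# Query via the systemd-resolved stub (127.0.0.53), then via a public
# resolver directly; divergent answers point at a resolver conflict,
# not a network problem.
dig @127.0.0.53 www.cloudflare.com +short
dig @1.1.1.1    www.cloudflare.com +short
```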
The Result
Forge now survives reboots and network interruptions reliably. The proxy architecture isolates Forge services on port 3001 behind an access-coded endpoint, providing protection while maintaining public accessibility where needed.
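Two smoke tests make that claim checkable after a reboot. The URL is the deployment endpoint above; the port check assumes the backend binds 3001 on localhost only:

```shell
# The public endpoint should answer with a 200.
curl -fsS -o /dev/null -w '%{http_code}\n' \
  https://redbeardtampa.com/demos/next-gen-catalyst/forge-pro.html

# The backend port should be bound to a loopback address,
# not a public interface.
ss -tln | grep ':3001'
```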
The tools we build at Redbeard Solutions don't just work—they persist. They don't just demonstrate possibility—they deliver reliability. The Countryside Forge project exemplifies our philosophy: rapid prototyping constrained by rigorous production standards. Because code that dies when you close your terminal isn't infrastructure—it's a science experiment.