Introducing the Torrust Tracker Deployer

We're excited to announce a new tool we've been working on: the Torrust Tracker Deployer. It aims to make deploying the Torrust Tracker to a virtual machine as easy as running a single command.

Jose Celano - 22/12/2025

Hello, Torrust community!

We're excited to share some news about a new tool we've been developing: the Torrust Tracker Deployer. While it's not quite finished yet (we're close!), we wanted to give you an early preview of what we're building and why.

Work in Progress: This project is nearing completion but not yet ready for production use. Repository: github.com/torrust/torrust-tracker-deployer

The Challenge

A while back, we published a comprehensive tutorial on deploying Torrust to production. The tutorial covered all the necessary steps to get the Torrust Tracker running on a virtual machine with proper security, SSL certificates, and production-ready configurations.

While the tutorial is thorough and the process isn't particularly complex, it does involve many manual steps. You need to configure Docker, set up Nginx, obtain SSL certificates from Let's Encrypt, configure the tracker itself, and ensure everything is properly networked. It's the kind of task that becomes tedious when you need to do it multiple times or want to quickly spin up a new instance for testing.

Our Solution: The Torrust Tracker Deployer

The Torrust Tracker Deployer is our answer to this problem. The goal is simple: make deploying the Torrust Tracker as easy as running a single command.

Instead of following a lengthy manual process, you'll be able to deploy a fully configured, production-ready Torrust Tracker to your virtual machine with minimal interaction. The tool handles all the heavy lifting—from system dependencies to SSL certificates to tracker configuration.

Design Philosophy

When we started this project, we made a conscious decision about scope and focus. Rather than creating an incredibly flexible deployment tool that can handle every possible configuration and deployment scenario, we opted for simplicity.

The deployer comes with a pre-configured setup that we believe covers 99% of use cases. This means:

  • Sensible default configurations that work out of the box
  • Minimal user input required during deployment
  • Focus on getting a working tracker deployed quickly
  • Optimized for the most common deployment scenario: a single virtual machine

If you need a highly customized deployment or want to configure every aspect of your setup, the manual deployment tutorial is still available. But for those who want to get started quickly with a solid, production-ready configuration, the deployer is the way to go.

Who Should Use the Deployer?

The deployer is designed for a specific use case and audience. Here's how to decide if it's right for you:

Ideal For

  • Docker-based deployments: The deployer uses Docker for all services, making updates and version management straightforward
  • VM deployments: You want to run your tracker on a virtual machine (cloud or local)
  • Integrated monitoring: You want Prometheus and Grafana set up alongside your tracker
  • Quick setup: You have average infrastructure knowledge and want to speed up the initial installation process
  • Standard configurations: You're comfortable with sensible defaults and don't need extensive customization upfront
  • Learning and documentation: You want to understand tracker infrastructure requirements and see how to integrate with third-party tools

Not Ideal For

  • Bare-metal performance optimization: If you're deploying on bare metal and need to tune every aspect for maximum performance, the Docker overhead (though minimal) might not be acceptable
  • Non-Docker deployments: If you don't want to use Docker, this tool isn't for you
  • Complete beginners: The deployer requires average infrastructure knowledge—it's not designed for users with no experience managing servers
  • Highly specialized setups: If you need very specific, non-standard infrastructure configurations, the manual approach gives you more control

Bonus Value: Living Documentation

Beyond deployment, the tool serves another purpose: living, working documentation of the tracker's infrastructure requirements. By examining the deployer's configuration and generated files, you can:

  • Understand what the tracker needs to run properly
  • See how to integrate with Prometheus, Grafana, and MySQL
  • Learn Docker Compose configurations that work
  • Use it as a starting point, then customize to your specific needs

Think of it as a reference implementation that you can learn from and adapt.

Docker Choice: We chose Docker because it makes updating the tracker and its dependencies much easier. Yes, there's a small performance overhead, but the operational simplicity and update safety are worth the trade-off for most deployments.

Scope and Limitations: A Deployment Tool, Not a Management Platform

It's important to understand what this tool is—and what it isn't. The Torrust Tracker Deployer is a single-use deployment tool, not an ongoing infrastructure management platform.

What It Does

The deployer's job is to get your Torrust Tracker up and running quickly and correctly. It handles:

  • Initial infrastructure provisioning
  • Service installation and configuration
  • Getting everything running and verified
  • Setting up the initial monitoring and database stack

Think of it as an expert installation assistant—it does the heavy lifting of getting your tracker deployed, but then hands over control to you.

What It Doesn't Do

Once your tracker is deployed, ongoing infrastructure management becomes your responsibility as a system administrator. The deployer does not handle:

  • Checking and managing backups
  • Ongoing monitoring and alerting
  • Operating system updates and patches
  • Security audits and hardening
  • Performance tuning and optimization
  • Troubleshooting production issues
  • Scaling infrastructure as needs grow

System Administration Required: This is a deployment tool, not a managed service. You'll need basic system administration knowledge to maintain your tracker after deployment. Make sure you have backups, monitoring, and maintenance procedures in place.

This focused approach keeps the tool simple and reliable. Rather than trying to be a comprehensive infrastructure management platform (which would add enormous complexity), we concentrate on doing one thing well: getting your tracker deployed correctly.

Architecture: DDD for Infrastructure

One aspect that makes this project unique is that it's not a conventional infrastructure tool. We've applied Domain-Driven Design (DDD) principles to infrastructure automation—an approach more commonly seen in business applications than DevOps tooling.

Domain-First Thinking

In our architecture, the domain represents the core concepts we care about:

  • Services and their configurations
  • Dependencies between components
  • Deployment constraints and requirements
  • Infrastructure states and transitions
  • Business rules for deployments

These domain concepts live in the heart of the application, independent of any specific technology.

Tools as Infrastructure Layer

Tools like OpenTofu and Ansible are implementation details that live in the DDD infrastructure layer. This means:

  • The domain doesn't know or care about OpenTofu or Ansible
  • We could swap these tools for alternatives without changing domain logic
  • The core business rules remain stable even as tooling evolves
  • Testing is easier because domain logic is isolated from external dependencies

This separation creates a more maintainable codebase where deployment logic is expressed in terms of domain concepts (what we're deploying and why) rather than tool-specific commands (how to use OpenTofu or Ansible).
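As a rough sketch of what this separation looks like in practice (the type and trait names here are illustrative, not the deployer's actual API), the domain can depend on a provisioning abstraction while the OpenTofu-specific code lives behind it:

```rust
/// Domain-level description of what we want provisioned (hypothetical type).
pub struct VmSpec {
    pub name: String,
    pub cpus: u32,
}

/// Port: the domain only knows this abstraction, never OpenTofu or Ansible.
pub trait Provisioner {
    fn provision(&self, spec: &VmSpec) -> Result<String, String>;
}

/// Domain service: deployment rules expressed against the port.
pub fn provision_tracker_vm(p: &dyn Provisioner, spec: &VmSpec) -> Result<String, String> {
    // A domain rule: reject an obviously undersized machine before touching any tool.
    if spec.cpus < 1 {
        return Err(format!("'{}' needs at least 1 CPU", spec.name));
    }
    p.provision(spec)
}

/// Infrastructure layer: an adapter that would shell out to OpenTofu.
/// Swapping tools means writing another adapter; the domain code above
/// does not change.
pub struct OpenTofuProvisioner;

impl Provisioner for OpenTofuProvisioner {
    fn provision(&self, spec: &VmSpec) -> Result<String, String> {
        // Real code would render templates and run `tofu apply` here;
        // this sketch just returns a placeholder identifier.
        Ok(format!("instance-id-for-{}", spec.name))
    }
}
```

Testing the domain rule needs nothing more than a stub implementing `Provisioner`, which is exactly the isolation benefit described above.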

Why This Matters: By treating infrastructure tools as interchangeable implementation details, we gain flexibility and longevity. The ecosystem of infrastructure tools is constantly evolving, but our domain concepts remain stable. This architecture allows us to adopt new tools or techniques without rewriting the entire system.

The Development Journey: Three Iterations

Getting to the current Rust implementation wasn't a straight path. We took a deliberate, iterative approach, building two proofs of concept (PoCs) before settling on the final architecture. Each iteration taught us valuable lessons and helped us validate different approaches.

1. Bash/OpenTofu/cloud-init PoC

Repository: torrust-tracker-deploy-bash-poc

Technologies: Bash scripts, OpenTofu, cloud-init, Docker Compose
Focus: Infrastructure as Code with libvirt/KVM and cloud deployment
Status: ✅ Historical reference - Completed its research goals

This was our first exploration into automated deployment. We wanted to understand the basics of infrastructure provisioning and validate that we could automate the deployment workflow. The simplicity of Bash made it easy to prototype quickly, and OpenTofu gave us a taste of declarative infrastructure management.

2. Perl/Ansible PoC

Repository: torrust-tracker-deploy-perl-poc

Technologies: Perl, OpenTofu, Ansible, libvirt/KVM, cloud-init, Docker Compose
Focus: Declarative configuration management with mature automation tools
Status: ✅ Historical reference - Completed its research goals

The second iteration introduced Ansible for configuration management, which gave us better structure and reusability. We explored how established DevOps tools handle complex deployments and learned about the trade-offs between declarative and imperative approaches. Perl served as the orchestration layer, coordinating between different tools.

3. Torrust Tracker Deployer (Production)

Repository: torrust-tracker-deployer

Technologies: Rust, OpenTofu, Ansible, LXD, cloud-init, Docker Compose
Focus: Type-safe, performance-oriented deployment tooling
Status: 🚀 Active development - Nearing a stable, production-ready release

Armed with insights from both PoCs, we built the production version in Rust. The choice of Rust wasn't arbitrary—it aligns with the Torrust ecosystem (the tracker itself is written in Rust) and provides the type safety, performance, and reliability we need for a production deployment tool. We kept the best parts of our previous iterations (OpenTofu for infrastructure, Ansible for configuration, Docker Compose for service orchestration) while wrapping everything in a robust, well-tested Rust application.

This iterative approach allowed us to learn, experiment, and validate our ideas before committing to the final implementation. While it took more time upfront, it resulted in a much better end product.

AI-Assisted Development: A First for Torrust

Beyond the architectural innovation of applying DDD to infrastructure, this project represents another first for the Torrust organization: it's the first project entirely developed using AI agents.

A Different Approach

Up until now, all code in the Torrust Tracker has been human-crafted. We've used traditional development practices with careful, manual code writing. For the deployer, we took the opposite approach—100% of the code lines have been generated by AI agents.

But let's be clear: this doesn't mean we were just "vibe coding" or blindly accepting whatever the AI produced. Far from it.

Disciplined AI Development

Using AI agents effectively requires discipline and infrastructure:

  • Careful code review: Every line generated by AI agents was reviewed by humans. We examined the code for correctness, style, and alignment with our architecture.
  • Extensive documentation: We maintained comprehensive documentation that proved invaluable when working with agents. Clear specifications and architectural decisions help AI agents generate better code.
  • Testing-first mindset: We put special effort into building a robust E2E test suite. These tests give us confidence that agent-generated changes work correctly and don't break existing functionality.

The Testing Challenge

Building a reliable test suite for this project was particularly challenging:

  • Docker inside VMs: Testing deployment means running Docker containers inside virtual machines
  • Docker in Docker: Some scenarios require nested Docker environments
  • CI/CD compatibility: We spent considerable effort finding virtualization tools that work reliably on GitHub runners, allowing us to run comprehensive tests on every push and pull request

These testing challenges were especially difficult to solve, but they were crucial. Without a solid test suite, working with AI agents would have been much riskier.

AI as a Tool, Not a Replacement: Our experience shows that AI agents can be incredibly productive when combined with strong engineering practices. The key is treating AI as a powerful tool that still requires human oversight, clear specifications, and comprehensive testing.

Unexpected Complexity

We'll be honest: this project turned out to be more complex than we initially anticipated. What started as a straightforward automation tool has grown into a substantial codebase.

To give you an idea of the scope, here's a comparison of the deployer codebase versus the tracker itself:

Torrust Tracker Deployer

     928 text files.
     804 unique files.                                          
     126 files ignored.

-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
XML                              1          28461              6         131175
Rust                           523          11407          28515          54315
Markdown                       200          16861              6          41777
Text                             2             36              0           8689
JSON                            26              0              0           6828
YAML                            34            298            444           1400
Bourne Shell                     5             74             82            251
HCL                              4             39             46            204
TOML                             6             36             12            175
Dockerfile                       3             35             88             97
-------------------------------------------------------------------------------
SUM:                           804          57247          29199         244911
-------------------------------------------------------------------------------

Torrust Tracker

     593 text files.
     541 unique files.                                          
      63 files ignored.

-------------------------------------------------------------------------------
Language                     files          blank        comment           code
-------------------------------------------------------------------------------
Rust                           433          10170           9754          41429
Markdown                        37            670              0           1660
TOML                            33            117             20            990
SVG                              2             34              0            948
YAML                            10            129             14            674
JSON                             8              0              0            563
Bourne Shell                     9             38             15            104
Containerfile                    1             31             15             99
C                                1             21              2             86
SQL                              6              7              0             60
make                             1              6              0             11
-------------------------------------------------------------------------------
SUM:                           541          11223           9820          46624
-------------------------------------------------------------------------------

Yes, you read that right—the deployer codebase has actually surpassed the tracker itself in terms of lines of code! This might seem counterintuitive at first, but it reflects the complexity involved in automating system configuration, handling various edge cases, managing infrastructure as code, and ensuring reliable deployments across different environments.

Note: Much of the deployer's size comes from comprehensive testing, extensive documentation, infrastructure definitions, and automation scripts. It's not just about deploying a tracker—it's about doing it reliably, safely, and repeatably.

Current Status and Timeline

The Torrust Tracker Deployer is currently under active development and nearing completion. We've made significant progress, with the core functionality already in place.

What's Already Working ✅

The foundation is solid. Here's what we've completed:

  • Main application scaffolding (89% complete) - Console commands, logging, and presentation layer
  • Infrastructure provider support - Both LXD and Hetzner Cloud providers are fully implemented
  • Complete deployment workflow - You can deploy the full Torrust Tracker stack including:
    • Torrust Tracker (HTTP and UDP)
    • MySQL database
    • Prometheus for metrics collection
    • Grafana for visualization
  • Core commands:
    • create - Create deployment environment definition
    • provision - Create infrastructure resources (VMs)
    • configure - Install dependencies and configure services
    • release - Deploy application releases
    • run - Start and run deployed services
    • test - Verify deployment
    • destroy - Clean up resources
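These commands follow a fixed lifecycle, which can be sketched as a small state machine (the state names and transitions below are illustrative; the deployer's actual implementation may differ):

```rust
/// Illustrative environment states; `create` produces the initial `Created` state.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum State {
    Created,
    Provisioned,
    Configured,
    Released,
    Running,
    Destroyed,
}

/// Advance the environment by one command, rejecting out-of-order commands.
pub fn apply(state: State, command: &str) -> Result<State, String> {
    use State::*;
    match (state, command) {
        (Created, "provision") => Ok(Provisioned),
        (Provisioned, "configure") => Ok(Configured),
        (Configured, "release") => Ok(Released),
        (Released, "run") => Ok(Running),
        (Running, "test") => Ok(Running), // verification keeps the environment running
        (_, "destroy") => Ok(Destroyed),  // cleanup is allowed from any state
        (s, c) => Err(format!("cannot run '{}' while in state {:?}", c, s)),
    }
}
```

Encoding the order this way means a mistake like running `run` before `provision` fails immediately with a clear message instead of half-configuring a machine.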

Deployment Options

The deployer currently supports multiple deployment scenarios:

Hetzner Cloud (Production)

Status: ✅ Fully working and production-ready
Our primary target is Hetzner Cloud, a reliable European cloud provider. This is the recommended option for production deployments. For current pricing information, visit hetzner.com/cloud.

LXD (Local/Testing)

Status: ✅ Fully working, but intended for testing
The LXD provider allows you to deploy locally on your development machine or in CI environments. While it works perfectly, it's primarily intended for:

  • Running E2E tests during development
  • Quick local demos and experimentation
  • Learning how the deployer works without cloud costs

It's not recommended for production use, but it's great for getting started quickly if you have the dependencies installed.

Pre-Provisioned VMs

Status: ✅ Supported
Already have a VM from another provider? No problem. You can register a pre-provisioned instance and let the deployer handle everything except the initial VM creation (configure, release, run, etc.). This is perfect if:

  • You already have a VM you want to use
  • Your hosting provider isn't supported yet
  • You prefer to manage infrastructure provisioning yourself

Future Providers

Adding new cloud providers is straightforward: since we use OpenTofu, supporting a new provider only requires adding two templates. If there's demand for specific providers (AWS, DigitalOcean, Linode, etc.), they can be added relatively easily.

What's Remaining 🚧

We're in the final stretch with just a few remaining items:

  • Docker image for the deployer - To make installation even simpler
  • Additional console commands:
    • show - Display deployment status
    • list - List all deployments
  • HTTPS support - SSL/TLS certificates for:
    • HTTP tracker
    • Tracker API
    • Grafana dashboard
  • Backup and recovery - Database backups and disaster recovery procedures
  • Verbosity levels - Enhanced output control for debugging

We expect to have a stable, production-ready release with these features completed in approximately one month (around late January 2026). The repository is public, and you're welcome to follow along with development or even try it out, but please be aware that it's not yet finished and some features are still being polished.

You can track detailed progress on our official roadmap.

Almost There, But Not Yet Ready: While we're close to completion, the deployer is not yet ready for production use. APIs, configurations, and deployment processes may still change before the stable release.

Comprehensive Documentation

One aspect we're particularly proud of is the quality and breadth of documentation. We've invested heavily in making the deployer easy to understand, use, and contribute to.

Internal Development Documentation

For developers and contributors, we maintain extensive internal documentation:

  • Architectural Decision Records (ADRs): Every significant decision has been documented, explaining not just what we did but why we chose that approach
  • Technical guides: Deep dives into the architecture, domain model, and implementation details
  • Manual testing guides: Step-by-step procedures for verifying functionality when automated tests aren't sufficient
  • Contributing guidelines: Clear instructions on how to get started, code standards, and the development workflow

User-Facing Documentation

For end users, we provide clear, practical documentation that helps you get started quickly and understand how to use the deployer effectively.

Self-Documenting Program Design

Following Rust philosophy, the program itself is designed to be helpful and guide you through the deployment process:

  • Clear guidance: The tool tells you what to do next at each step of the deployment
  • Actionable error messages: When something goes wrong, the deployer doesn't just tell you what failed—it suggests how to fix it
  • Progressive disclosure: Information is presented when you need it, not all at once
  • Helpful defaults: Sensible default values with explanations of what they mean
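To illustrate the "actionable error messages" idea, here is a minimal sketch (a hypothetical type, not the deployer's real error hierarchy) of an error that pairs the failure with a suggested fix:

```rust
use std::fmt;

/// Sketch of an error that carries both what failed and how to fix it.
/// The field names are illustrative, not the deployer's actual API.
pub struct ActionableError {
    pub what_failed: String,
    pub suggestion: String,
}

impl fmt::Display for ActionableError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Always print the failure first, then the remedy, so the user
        // never sees a dead-end message.
        writeln!(f, "Error: {}", self.what_failed)?;
        write!(f, "Suggestion: {}", self.suggestion)
    }
}
```

A missing OpenTofu binary, for instance, could then surface as both the failure ("tofu was not found on PATH") and the remedy ("install OpenTofu, or run the built-in dependency check command").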

This attention to user experience means you spend less time reading documentation and more time getting things done.

Documentation as Code: Our documentation lives in the repository alongside the code, versioned and reviewed just like any other contribution. This ensures it stays accurate and up-to-date as the project evolves.

What to Expect

When the deployer is ready, you can expect:

  • Simple Installation: A straightforward installation process that gets you up and running quickly
  • Fast Deployment: Complete deployment in approximately 1-2 minutes
    • LXD (local): Less than 45 seconds, with provisioning taking about 30 seconds
    • Cloud providers: Potentially even faster for provisioning, with configuration/release/run taking less than a minute depending on instance size and bandwidth
  • Automated Configuration: All the tedious setup steps from the manual tutorial handled automatically
  • Production Ready: SSL certificates, proper security configurations, and optimized settings out of the box
  • Easy Updates: Simplified process for updating your tracker to newer versions
  • Comprehensive Documentation: Clear guides on how to use the deployer and customize configurations

Compare this to the manual deployment process, which involves following a lengthy tutorial with many steps and can take several hours to complete. The deployer reduces that to minutes.

Prerequisites

To use the deployer, you'll need:

Common Dependencies

  • OpenTofu - For infrastructure provisioning
  • Ansible - For configuration management

Don't worry if you don't have these installed—the deployer includes a console command to help you install them and check if they're available on your system.

Note: A Docker image is also planned, which would eliminate the need to install dependencies on your machine. However, this feature is not yet implemented.

Provider Requirements

Depending on your chosen provider, you'll need provider-specific credentials:

  • Hetzner Cloud: A Hetzner account, a project, and an API token. The Hetzner documentation explains how to obtain these. You'll provide the API token when creating your deployment environment.
  • LXD: LXD installed and configured on your local machine
  • Pre-provisioned VM: SSH access to your existing VM

Security Note: The deployer runs entirely on your local machine—your API tokens never leave your computer. However, make sure your data folder has proper permissions so nobody can access your credentials. Since this is a single-use tool, you can backup your environment definition after deployment and securely remove everything from your local machine, keeping the backup in a safe place.

Error Handling and Recovery

Given that the entire deployment process takes only a minute or two, we've opted for simplicity over complex recovery mechanisms. If something goes wrong, the recommended approach is to destroy the environment and start fresh; it's faster than trying to recover from a failed state.

That said, we've built in comprehensive observability:

  • Excellent logging and tracing: You can see exactly what happened and where things went wrong
  • Human-readable state: All internal state is persisted in JSON format—no cryptic binary formats
  • Build artifacts preserved: Final OpenTofu and Ansible files are saved in a "build" folder for inspection
  • Manual intervention possible: If you want to continue manually, you can run the Ansible playbooks yourself—they're idempotent, so it's safe to run them again
  • Clear failure states: When a command fails, the environment enters a "failed" state with a detailed description of the problem

The state machine prevents continuing with normal commands after a failure, but since the whole process only takes a minute, there's really no need to try to recover—just destroy and redeploy.
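Because the persisted state is plain JSON, inspecting a failed environment is as simple as opening a file. A dependency-free sketch of writing such a record (field names are illustrative; the real schema may differ):

```rust
/// Illustrative persisted-state record; the deployer's actual schema may differ.
pub struct EnvironmentRecord {
    pub name: String,
    pub state: String,
    pub failure_reason: Option<String>,
}

impl EnvironmentRecord {
    /// Serialize to a small JSON document by hand so the sketch stays
    /// dependency-free; a real implementation would use a serialization
    /// library. The point is that the output is readable by humans and
    /// standard tools, not a cryptic binary format.
    pub fn to_json(&self) -> String {
        let reason = match &self.failure_reason {
            Some(r) => format!("\"{}\"", r),
            None => "null".to_string(),
        };
        format!(
            "{{\n  \"name\": \"{}\",\n  \"state\": \"{}\",\n  \"failure_reason\": {}\n}}",
            self.name, self.state, reason
        )
    }
}
```

A failed deployment would then leave behind something like a record whose `state` is `"failed"` and whose `failure_reason` describes exactly which step broke.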

Getting Involved

We're always excited to have community involvement in Torrust projects. If you're interested in the deployer:

  • Check out the repository: Visit github.com/torrust/torrust-tracker-deployer to see the code
  • Star the repo: Show your interest and stay updated on releases
  • Follow development: Watch for issues, pull requests, and discussions
  • Provide feedback: Once we release a beta version, we'd love to hear about your deployment experiences
  • Get help: If you need assistance, have questions, or want to share your experience, open an issue or start a discussion in the GitHub repository

Conclusion

The Torrust Tracker Deployer represents our commitment to making the Torrust ecosystem more accessible and easier to use. While the journey has been more complex than anticipated, we're nearly at the finish line and believe the result will be worth the effort.

Deployment shouldn't be a barrier to running your own tracker. With this tool, we're working to remove that barrier and make it possible for anyone to deploy a production-quality BitTorrent tracker in minutes rather than hours.

We're in the final stretch of development, and we'll keep you updated as we approach the stable release. We can't wait to see what you build with it!

Happy tracking!


Have questions or feedback? Open an issue or start a discussion in the deployer repository, or join the broader conversation in the tracker discussions.