
Moving to hourly CI/CD: A people-first approach to technical transformation



If you’re growing fast, every stage of growth can test your technical limits. That’s what happened with our deployment process. We had a weekly release model that served us well until a few months ago. But with our pace of product velocity, we needed to shift to a release model that could accommodate our progress better. 

We realized we had to move to hourly continuous deployment to solve this problem. What we didn’t anticipate was everything the shift would entail: it involved far more people and change management than we expected.

In this article, we’ll share how we moved to an hourly deployment model by taking a people-first approach. 

Why we said goodbye to weekly releases 

Weekly releases historically worked well for Quo during our early scaling period. But at some point in our rapid growth, we began to consistently hit blockers. Our weekly release was holding up internal releases and public launches. 

For example, let’s say the product team identifies a critical customer-facing bug that needs a simple fix: a one-line copy change. But the next release window isn’t until tomorrow. We’d then need to cherry-pick the fix into production, then back-merge the changes into dev and staging.

Similarly, let’s say engineering completes a major feature that’s been in development for months, with a release date outside the regular schedule. We’d need to cherry-pick all of this work into staging to be tested. The QA team would then need to scramble to test everything and release it to production.

This was our reality with weekly releases. Every release felt like a traffic jam, with urgent fixes stuck behind the window, and teams playing a complex game of “what goes out when?” The weekly cadence created artificial boundaries that had little to do with actual business priorities or technical dependencies.

But more fundamentally, it created confusion. Teams constantly asked us: 

  • “What environment has what code?” 
  • “When will my fix actually reach users?” 
  • “Why can’t we just ship this now?”

The weekly release model became a bottleneck that was slowing down our product velocity. It was also creating unnecessary stress across the organization.

Shifting to hourly CD with trunk-based development

Our solution was architecturally straightforward: adopt trunk-based development with hourly continuous deployment. Instead of batching changes for weekly releases, every merged pull request would automatically deploy to production within 60 minutes.

The technical implementation centered around three core principles:

  1. Single source of truth: We eliminated our complex dev → staging → stable branch workflow in favor of a simple main branch. This meant that main is always guaranteed to have the code that’s in production.
  2. Feature flag-driven releases: Rather than coordinating platform-level releases, we shifted ownership to product teams through feature flags. Teams can now deploy code safely behind flags and control when features become visible to users.
  3. Automated quality gates: We implemented comprehensive automated testing at every level, from unit tests required in every PR to nightly E2E suite runs against main.
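To make the feature-flag principle concrete, here is a minimal sketch of flag-gated code. The flag names and in-memory store are hypothetical, purely for illustration; a real setup would back this with a flag service or config store.

```typescript
// Minimal feature-flag gate: code ships to production "dark," and the
// product team flips the flag when the feature should become visible.
type FlagName = "new-checkout" | "redesigned-dashboard";

// Illustrative in-memory flag store (a real system would fetch these
// from a flag service at runtime).
const flags: Record<FlagName, boolean> = {
  "new-checkout": false,        // merged and deployed, but still dark
  "redesigned-dashboard": true, // enabled by the product team
};

function isEnabled(flag: FlagName): boolean {
  return flags[flag] ?? false;
}

// Call sites branch on the flag rather than on release timing.
function renderCheckout(): string {
  return isEnabled("new-checkout") ? "new checkout UI" : "legacy checkout UI";
}

console.log(renderCheckout()); // → "legacy checkout UI"
```

The key property is that deploying code and releasing a feature become two separate decisions: the merge ships the code, and the flag flip ships the experience.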

The human challenge

Here’s what the technical documentation couldn’t capture: as much as this was a technical change, it was also a fundamental shift in how our entire organization thought about shipping software. This meant that some roles would have shifts in ownership, priorities, and oversight.

This came with some concern from the team. After all, change is hard. We were ramping up our release frequency and distributing our QA process from a single team across the entire organization. These were valid concerns, but we addressed them with careful planning and guidance ahead of the rollout.

Changing how we do QA 

The biggest effect of moving to an hourly CD model was in our QA process. For years, our QA team had operated on a weekly cycle, thoroughly testing each release bundle before it went live. It was predictable and comprehensive, but was becoming unsustainable as we grew.

The new model required a complete reimagining of quality ownership, where testing responsibility would shift more heavily to developers and teams. Instead of a centralized QA bottleneck, quality became everyone’s responsibility. The QA team transitioned to an on-demand, consultative role.

Change management with empathy

One of our most crucial realizations was that technical changes, even when well-intentioned, can feel threatening if they’re not handled with empathy. When we proposed that QA involvement would become more ad hoc, we weren’t just describing a process change. We were fundamentally altering how an entire team saw their role at Quo.

We learned that successful change management requires:

  • Transparent communication: We didn’t just announce the change; we explained why. Every stakeholder needed to understand not just what was changing, but how it would benefit both them and our users.
  • Collaborative documentation: Rather than top-down mandates, we involved all teams in creating the new processes. This ensured they felt ownership over their evolving role.
  • Gradual transition: We didn’t flip a switch. The rollout happened in phases, with plenty of time for questions, concerns, and adjustments. Teams could test the new workflow while still having the safety net of familiar processes.

Cross-team orchestration

The shift also required unprecedented coordination between frontend and backend teams. Instead of pushing releases to the codebase independently, the teams had to work together to maintain release parity. It became a communication effort for our teams as much as a technical one.

This meant product managers, engineers, and designers all needed to think differently about timing and dependencies. Feature flags became the lingua franca that allowed teams to work independently while maintaining system coherence.

The transformation in practice

Here’s how the transformation played out across our organization:

  1. Instant visibility: No more mysterious deployment states. Everyone sees exactly what code is running where. Teams aren’t asking which features are in which environment anymore. 
  2. Product-driven releases: Product teams now control when features go live through feature flags, not engineering release schedules. This independence dramatically reduced coordination overhead and gave product managers the agility they needed.
  3. Simplified mental models: Instead of tracking multiple branches, deployment windows, and complex merge processes, teams work with a simple model: merge to main, and your code deploys automatically.
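For product-driven releases, a common pattern (a sketch of the general technique, not our exact tooling) is a staged percentage rollout, where each user hashes into a stable bucket so a flag can ramp from 10% to 50% without flip-flopping anyone already enabled. All names here are illustrative:

```typescript
// Deterministic percentage rollout: each (user, flag) pair hashes to a
// stable bucket in 0–99. Raising the rollout percentage is monotonic:
// anyone enabled at 10% stays enabled at 50%.
// The hash is a tiny FNV-1a-style mix, chosen for brevity, not production use.
function bucket(userId: string, flag: string): number {
  let h = 2166136261;
  for (const ch of userId + ":" + flag) {
    h ^= ch.charCodeAt(0);
    h = Math.imul(h, 16777619); // 32-bit integer multiply
  }
  return Math.abs(h) % 100;
}

function isEnabledFor(userId: string, flag: string, rolloutPercent: number): boolean {
  return bucket(userId, flag) < rolloutPercent;
}

const u = "user-42";
console.log(isEnabledFor(u, "new-checkout", 0));   // → false (0% rollout)
console.log(isEnabledFor(u, "new-checkout", 100)); // → true (full rollout)
```

Because the bucket depends only on the user ID and flag name, the same user gets the same answer on every request, which keeps the experience consistent as the product team ramps the flag up.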

Proactive communication and ownership

The new model also demanded higher standards from everyone:

  • Developer ownership: With every PR going directly to production, engineers needed to embrace a higher level of responsibility. Code reviews became more thorough, testing requirements became stricter, and the bar for merge readiness rose significantly.
  • Proactive quality: Instead of reactive QA cycles, quality became a continuous concern. To enable a more proactive QA approach, we deploy the current state of the application to a Cloudflare Pages preview on every PR for testing. Issues can be caught and fixed immediately, and reviewers get more breathing room to conduct a comprehensive code review.
  • Communication excellence: Without the coordination imposed by weekly releases, teams needed to communicate more intentionally about dependencies and timing.
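The merge-readiness bar described above can be sketched as a simple aggregation over required checks. The check names and shapes below are hypothetical, meant only to illustrate the "every check must be green before auto-deploy" rule:

```typescript
// Aggregate quality gate: a PR may auto-deploy only when every
// required check has passed; a pending or failed check blocks it.
type CheckStatus = "passed" | "failed" | "pending";

interface Check {
  name: string;
  status: CheckStatus;
}

function readyToDeploy(checks: Check[]): boolean {
  // An empty check list is treated as "not ready" to fail safe.
  return checks.length > 0 && checks.every((c) => c.status === "passed");
}

const prChecks: Check[] = [
  { name: "unit-tests", status: "passed" },
  { name: "lint", status: "passed" },
  { name: "preview-deploy", status: "pending" }, // preview still building
];

console.log(readyToDeploy(prChecks)); // → false
```

Treating "no checks" as not ready is a deliberate fail-safe choice: a misconfigured pipeline should block a deploy rather than wave it through.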

Lessons in change management

Looking back, the technical implementation was the easy part. The real work was in helping people understand that their evolving roles represented growth.

We discovered that fear of change often stems from uncertainty about the future. For the Web and QA teams, there was uncertainty about whether we would be shipping more bugs with each release. We were changing our QA team’s process and their role within the organization. By involving them in designing the new processes, we transformed potential resistance into active participation. The QA team didn’t just accept the new model; they helped create it.

Our extensive documentation wasn’t just about process. It was about giving people confidence. When someone could reference exactly how our branching strategy worked, or what their pull requests should adhere to, they felt equipped rather than adrift.

The impact of hourly CD

So far, the transformation has exceeded our expectations. Here’s how:

  • Massive velocity gains: We went from releasing once a week to nearly 40 times per week, depending on what changed in the codebase. This shift drastically shortened the feedback loop between writing code and delivering value to customers.
  • Accelerated test coverage: The new workflow drove a 20x faster test growth rate, resulting in a 100% increase in test coverage within just three months. Teams now write and run more tests with confidence, reinforcing quality across every deployment.
  • Reduced complexity: Teams no longer need to track deployment schedules, coordinate release windows, or navigate complex branch merges. The cognitive overhead of shipping software has dropped dramatically.
  • Eliminated blockers: Code fixes now deploy in under an hour—or on demand—not by the next weekly window. Product features can be enabled for testing immediately after merge, regardless of what else is in progress.
  • Improved team dynamics: With clearer ownership boundaries and simpler processes, teams collaborate more effectively. Product and Engineering spend less time on coordination and more time building great features.
  • Enhanced quality: Counterintuitively, moving faster made us more stable. Smaller, more frequent releases are easier to test, easier to roll back, and create tighter feedback loops between code changes and user impact.

Looking forward: Culture as infrastructure

This transformation taught us that the most important infrastructure isn’t technical — it’s cultural. Building systems that can evolve requires building teams that can evolve too.

As we continue to grow, we’re applying the same people-first approach to other technical challenges. Whether it’s adopting new technologies, scaling our systems, or evolving our processes, we’ve learned that empathy, transparency, and collaboration aren’t just nice-to-haves. They’re prerequisites for sustainable technical progress.
