How to Track Your Team’s Adoption of Preview Driven Development
Preview driven development is a promising approach for development teams: it shortens review cycles and cross-pollinates the best ideas, all by shipping more often.
But if your team is adopting preview driven development, how can you monitor success? How can you track your team’s progress toward complete adoption and determine how well you’re following this approach? Here’s what you need to know.
The single most important metric for preview driven development: Deployments
As with anything else, the best place to start when tracking adoption of preview driven development is the data. In this case, one metric stands out as the most important for understanding your team’s progress: how many deployments per day your team is making.
To be clear, the number of deployments your team makes each day is not a perfect metric by any means. However, it does encapsulate a lot of the essence of preview driven development, which is to shorten feedback loops and increase team collaboration.
That’s because each deployment usually represents one iteration of a feedback loop and communication within the development team. Someone developed something, someone else left feedback, changes were made and a new version was deployed. This process suggests that the more you deploy, the more the team is talking and working together. On the other hand, if your team doesn’t talk to each other, you likely won’t have many deployments.
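To make the metric concrete, here is a minimal Python sketch that tallies deployments per calendar day from a list of deployment timestamps. It assumes you can export those timestamps from your CI/CD provider; the `deployments_per_day` helper and the sample events are hypothetical, shown for illustration only.

```python
from collections import Counter
from datetime import datetime

def deployments_per_day(timestamps):
    """Tally deployments per calendar day from ISO-8601 timestamp strings."""
    days = Counter(datetime.fromisoformat(ts).date() for ts in timestamps)
    return {day.isoformat(): count for day, count in sorted(days.items())}

# Hypothetical deployment events exported from a CI/CD system
events = [
    "2024-05-01T09:15:00",
    "2024-05-01T14:02:00",
    "2024-05-02T10:30:00",
    "2024-05-02T16:45:00",
]

print(deployments_per_day(events))  # {'2024-05-01': 2, '2024-05-02': 2}
```

Plotting this daily count over a few weeks makes the trend easy to see: a rising line suggests feedback loops are tightening, while long flat stretches may signal the behaviors discussed below.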
Why deployments are so important to preview driven development
Digging deeper, deployments are so important to preview driven development because they get to the heart of two things that tend to go wrong when writing software: misunderstandings about what was agreed on, and feedback that arrives too late to act on.
Founders and product owners know first hand the pain of thinking they’ve had a meeting of the minds, only to be surprised later by the disconnect that shows up in the implementation. When this happens, many organizations revert to a waterfall methodology with more detailed specifications.
However, this is not the best approach, as it’s usually not a lack of details that leads to the disconnect. It’s actually the fact that there’s too much time in the process between when the scope was specified, when someone worked on it and when they actually got feedback.
Instead, the best solution to dealing with these issues is to shorten the feedback loop. Preview driven development accomplishes this objective by asking developers to open pull requests as soon as something is stable enough to be deployed. This gives everyone on the development team a chance to see where things are heading and provide proactive, early feedback. As a result, this approach can ease the impact of misunderstandings and miscommunications.
How to help your team increase their deployments
If increasing the number of deployments per day is key to a successful adoption of preview driven development, how can you help your team achieve this goal?
To start, there are three common behaviors that delay deployments, and it’s important to help your team nip them in the bud:
1) Avoid long running development branches
Long running development branches typically occur when developers are working independently for long periods of time. These long running branches are harder to review, harder to merge and often born of a failure to break the work down during planning.
They’re also a sign that collaboration is lacking, since long running development branches typically mean someone is making decisions in isolation rather than checking in with the team and giving consideration to the entire group.
As a result, it’s important to encourage your team to avoid these long running branches and instead open pull requests as soon as what they’re working on is stable enough for deployment.
2) Eliminate disagreements about review feedback
Disagreements in the review process can cause significant delays. These disagreements typically come from two places:
Arguing about stuff that does matter: This occurs when a reviewer leaves feedback and the author doesn’t want to address it. Unblocking these arguments requires your team to place an emphasis on always valuing the reviewers’ time. You can help grease the wheels here by having developers provide context to reviewers and ensuring they defer to the reviewer (e.g. if the reviewer thinks something is unclear, the developer should trust them rather than arguing with them about it). For even more on this topic, Michael Lynch wrote a great piece on valuing reviewer time.
Arguing about stuff that doesn’t matter: This generally occurs over stylistic choices that don’t have a real user impact. In these cases, you need a coding standard to avoid any bikeshedding. Automated CI checks are a great way to enforce these standards without having humans constantly deliver bad news. This way, everyone on the team can point negative feelings toward the robot rather than being irritated by one another – which can cause consternation and stifle collaboration.
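In practice you’d reach for an existing linter or formatter (e.g. Black, ESLint or gofmt) rather than writing your own, but as a toy illustration of what an automated standards check does, here is a hypothetical Python sketch that flags over-long lines and returns a nonzero exit code for CI:

```python
from pathlib import Path

MAX_LINE_LENGTH = 100  # hypothetical team-agreed limit from the coding standard

def check_line_lengths(paths, limit=MAX_LINE_LENGTH):
    """Return (path, line_number) pairs for lines exceeding the limit."""
    violations = []
    for path in paths:
        for lineno, line in enumerate(Path(path).read_text().splitlines(), start=1):
            if len(line) > limit:
                violations.append((str(path), lineno))
    return violations

def run_check(paths):
    """CI entry point: print each violation and return a nonzero code if any."""
    violations = check_line_lengths(paths)
    for path, lineno in violations:
        print(f"{path}:{lineno}: line exceeds {MAX_LINE_LENGTH} characters")
    return 1 if violations else 0  # a nonzero exit code fails the CI job
```

Wired into a CI pipeline, a check like this delivers the “bad news” on every pull request, so the standard is enforced consistently and no human has to play style police.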
3) Start getting more feedback from the team
Finally, not getting enough feedback will delay deployments. Resolving this problem requires adopting a mindset that welcomes reviews.
To do so, you can codify review practices into the coding standard; that way, people understand where they can and should give feedback. Additionally, let all new hires know they should feel comfortable giving feedback in code reviews, and help them understand where to focus it.
Ready to get started with preview driven development?
If you’re ready to get started with preview driven development, you’ve come to the right place. Reach out to us today to learn how Flowerwork can help your team embrace best practices like those covered here to increase deployments in a way that shortens feedback loops and improves collaboration.