People & Process

Using Git analytics for effective & kind 1:1s

5 min read
Two excited team members chat with illustrated dialogue boxes in the background

Hi! 👋  I’m Jenny, a data scientist at Multitudes. 

Before working here, I’d never really seen a culture analytics tool like Multitudes, although I’d done plenty of employee wellbeing surveys, agile team retros, and 1:1s at previous companies. As an individual contributor (IC), I was a bit unsure about how a tool like this could bring value to ICs like me.

I didn’t want poorly defined metrics to change our team dynamics for the worse, or to incentivise unproductive, contrived behaviours when we’re all just trying to get work done. Besides, I’d been lucky enough to be part of some wonderful, high-performing teams; from what I could tell, it didn’t feel like anyone was struggling, or that we needed a new tool to help us improve.

Six Multitudes team members standing in a variety of fun poses on a beach.
Jenny Sahng (kneeling bottom left) and the Multitudes team meet up in Whangamata for a team week in June 2021.

I initially came on board because I resonate with our vision of making equity the default. However, now that I've used Multitudes myself, it's cemented my belief that this is a tool for equitable change in the workplace. As a remote and distributed team ourselves, it’s especially important to understand how we’re doing and to make data-driven decisions when we experiment with team processes. By “dogfooding” our own product, not only do we get first-hand awareness of user pain points, but we also test out the benefits of Multitudes in our own workplace. 

Where other tools use Git metrics to measure individual performance, Multitudes is more interested in the human dynamics behind the work. I’ve found pull requests (PRs) to be a little like petri dishes for team dynamics, since they require collaboration, feedback, and support between team members. Instead of measuring how much work people are doing, we wanted to measure how people are working together, so that we can incentivise effective collaboration.

How we use Multitudes at Multitudes

We use Multitudes in our fortnightly 1:1s with Lauren, our CEO (we’re a small and flat team 😊). Each 1:1 is structured around a series of questions, and part of the meeting includes going through the Multitudes app together. Lauren and I have access to the same view, so one of us shares our screen and then we discuss what we see.

Flow of work metrics motivate me to keep my PRs concise

Flow of work metrics tell us about how our processes are helping our team’s performance. A key metric here is Time To Merge (TTM). This is the time from PR creation to PR merge.

A timeline with labels for "Created" and "Merged" at the extremes. The time between the two points is Time to merge (TTM).
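To make this concrete, here’s a minimal sketch of how a TTM number could be computed from PR timestamps (for example, the created_at and merged_at fields returned by the GitHub pulls API). This is just an illustration of the idea, not how Multitudes actually implements it, and the median is simply one reasonable team-level roll-up:

```python
from datetime import datetime
from statistics import median

def time_to_merge_hours(created_at: str, merged_at: str) -> float:
    """TTM for one PR: hours between creation and merge.

    Timestamps are ISO 8601 strings, like the created_at / merged_at
    fields returned by the GitHub pulls API.
    """
    created = datetime.fromisoformat(created_at.replace("Z", "+00:00"))
    merged = datetime.fromisoformat(merged_at.replace("Z", "+00:00"))
    return (merged - created).total_seconds() / 3600

# Hypothetical PR data; in practice this would come from the GitHub API.
merged_prs = [
    {"created_at": "2021-06-01T09:15:00Z", "merged_at": "2021-06-01T14:40:00Z"},
    {"created_at": "2021-06-02T10:00:00Z", "merged_at": "2021-06-04T16:30:00Z"},
]

# Aggregate at the team level (a median is one reasonable roll-up).
team_ttm = median(
    time_to_merge_hours(pr["created_at"], pr["merged_at"]) for pr in merged_prs
)
print(f"Team time to merge: {team_ttm:.1f} hours")
```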

We chose TTM because we wanted to focus on teamwork, not meaningless quantity measures like “lines of code changed” (umm, package-lock.json, anyone?) or “number of PRs merged”. I was keen for a metric that helped me minimise long, drawn-out PRs, which aren’t fun for anyone, especially the PR author.

Sustainable performance comes from teamwork, not individual heroism. For this reason, Multitudes aggregates TTM at the team level, not the individual level. This is so that these performance-related metrics can’t be used as a proxy for individual contribution – contribution is multi-faceted and shouldn’t be reduced to a simplistic number. Also, if there were any big spikes on the TTM chart next to my name, I might be put in the awkward position of justifying why some of my PRs took a long time to merge, when it might have been due to factors outside of my control, like waiting for feedback or getting interrupted by customer support work.

What Multitudes does help with is diagnosing the issues that might be increasing a team’s time to merge – for example, how long people have to wait to get feedback. Since Multitudes gives visibility into how much time is spent in this waiting phase, it’s been a good way to experiment with how we request and receive reviews. You can read more about how we give insights on this waiting phase in our blog post on what we measure and why.

There are also aspects of a PR’s merge time that the author can control. This data has led to retro actions like trialling a maximum line change limit on our PRs, and being more rigorous about breaking up large stories. Our team has really enjoyed these process changes so far – small PRs are a joy to review, and they speed up our lead time considerably. I’ve especially noticed that I’ve become more disciplined about keeping my PRs clear and concise, and that I’ve been feeling a greater sense of achievement from the smoother, quicker reviews that result.


Wellbeing metrics have my back when I’m putting in a lot of long hours

Our Wellbeing section shows the amount of work that’s being done outside of my usual working hours. This is one of my favourite metrics, especially as an IC. Bringing up how much work I’ve been doing out of hours (OOH) can be a bit awkward – I don’t want to sound like I’m fishing for sympathy or bemoaning the work. Sometimes things just have to get done, and I’m part of the decision-making around why certain timeframes are tight. But if it’s been happening for a few weeks, I do want it to be rectified before it becomes a trend.

1:1s without wellbeing metrics: "So what's on your mind?" "Uh huh! Oh! Uhh.. Nothing much! All good!"

Because Lauren and I have a regular routine of going through the Multitudes metrics together, she will see and recognise the OOH work before it becomes an issue. If our app shows that any of us have been doing a lot of OOH work, we will talk about how this happened and create action items around how we can better plan or rebalance our workload to avoid it in the future. If it’s been a really big sprint, Lauren will often tell us to take an afternoon off too – as a way to recuperate after a period of lots of screen time and perhaps not much sleep! Tracking out-of-hours work helps my manager and me work together to make sure I have more balance.
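For illustration, here’s a rough sketch of how an out-of-hours share could be derived from activity timestamps. The working-hours window and the events below are hypothetical, and a real tool would account for each person’s timezone and usual schedule – this isn’t Multitudes’ actual logic:

```python
from datetime import datetime, time

# Hypothetical working-hours window (a real tool would use each person's
# timezone and usual schedule, not a hard-coded 9-6, Monday-Friday).
WORK_START, WORK_END = time(9, 0), time(18, 0)

def is_out_of_hours(ts: datetime) -> bool:
    """True if a commit or PR event falls outside the usual working window."""
    is_weekend = ts.weekday() >= 5  # 5 = Saturday, 6 = Sunday
    return is_weekend or not (WORK_START <= ts.time() <= WORK_END)

# Hypothetical activity timestamps for one person over a week.
events = [
    datetime(2021, 6, 14, 10, 30),  # Monday morning: in hours
    datetime(2021, 6, 15, 21, 45),  # Tuesday night: out of hours
    datetime(2021, 6, 19, 11, 0),   # Saturday: out of hours
]

ooh_share = sum(is_out_of_hours(e) for e in events) / len(events)
print(f"{ooh_share:.0%} of activity was outside working hours")
```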

Collaboration metrics tell my manager whether I’ve got the support I need

On our collaboration page, we look at whether everyone is giving and receiving enough support. As someone who tends to be on the more talkative side, these metrics remind me to invite feedback on my PRs from quieter team members whose input I always value.

The feedback flows chart on this page is also really fun to look at! It shows me who I’ve been talking to in PR reviews and comments. Lauren finds this especially helpful as a starting point for discussions in our 1:1s, as it gives her an overview of how our team is working together – especially important given that we’re all remote!


A Sankey chart showing lines going from left to right. There are five nodes on each side, labelled with the GitHub usernames of our team members. The left-hand side shows the people who gave feedback, and the right-hand side shows the people who received feedback. The thickness of each line shows how many comments were given. The left-hand node "Jenny S" is highlighted, showing a thick line to "kanocarra" on the right, and thinner lines to "dannash100", "Vivek K", and "mike247".
As you can see, I am indeed one of the talkative ones, or at least I was this month! 😅 The number shows how many comments (including review comments) I’ve written to various PR authors on my team.
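Under the hood, a chart like this is essentially an aggregation of who commented on whose PRs. Here’s a small illustrative sketch (with hypothetical usernames and data) of how those edge weights could be counted – not Multitudes’ actual implementation:

```python
from collections import Counter

# Hypothetical (commenter, PR author) pairs from PR reviews and comments;
# in practice these would come from the GitHub review / comment endpoints.
comments = [
    ("jenny", "kanocarra"),
    ("jenny", "kanocarra"),
    ("jenny", "dannash100"),
    ("mike247", "jenny"),
    ("kanocarra", "kanocarra"),  # comment on their own PR, excluded below
]

# Edge weights for a feedback-flows chart: feedback giver -> receiver.
flows = Counter(
    (giver, receiver) for giver, receiver in comments if giver != receiver
)

for (giver, receiver), count in flows.most_common():
    print(f"{giver} -> {receiver}: {count} comments")
```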

Feedback flows also provide a good opportunity for us to think about whether everyone is well-supported. If someone is not receiving very much feedback, are they getting enough support to grow and become a better developer? These are important questions that this chart can prompt, especially if you have junior developers or new team members who are onboarding.

Research also shows that people from marginalised groups (e.g. women) tend to get less feedback, and less useful feedback that promotes growth (source). This is why measuring feedback is one way for teams to ensure that their day-to-day collaboration is equitable. For me, these insights have prompted more pairing sessions with team members who have been receiving less feedback. They have also been a good reminder for me to specifically request reviews (and therefore opportunities for learning) from colleagues who are more experienced, if I’m working alone in an unfamiliar codebase or framework.

Multitudes gives me a starting point for open, honest discussions

What I like about Multitudes is that it provides data-driven starting points for team and 1:1 discussions. It’s not prescriptive about how you respond to a particular metric, and it lets you bring your own team’s context to the interpretation of the data. I love that it surfaces bottlenecks that are otherwise hard to see and gives good prompts for areas where I can personally improve. Since my manager and I are both looking at the same data and bringing our unique context to the conversation, it makes 1:1s more open, honest, and comfortable. I’m excited to be building a tool that makes teamwork better for all individual contributors: talkers, listeners, newbies, veterans, non-native English speakers, people of colour, queer folks, women, and more. Let us know what indicators you’d want to discuss in your 1:1s for thriving, sustainable team collaboration!


Contributor
Jenny Sahng
Data Scientist