
Direction-Autonomy Model

Management is not about being in charge.

We usually associate the term “manager” with people managers. At first glance, a people manager’s job consists of bossing people around – telling them what to do. For example, engineering managers no longer write much code, but rather focus on organizing teams.

But a typical engineering team has a second manager – a product manager. The PM typically lacks formal authority. They are responsible for defining some work that should be done, but can’t actually tell anyone what to do or how to do it.

And yet, the PM is still a manager.

The responsibility of management is not simply making decisions and then passing them on to someone else for execution. The responsibility of management is to create an environment in which decisions can be made and work can happen. A manager does not need hiring and firing authority in order to do that. In fact, John Cutler’s litmus test for what makes a good manager is precisely the reverse – to ask “if your team could choose whether or not to hire you, would they?”

Dimensions of Decision-Making

And yet, the PM needs to make decisions. What’s more, those decisions need to stick. Nothing undermines team confidence like flip-flopping between ideas and making it impossible to plan ahead. This is the real value that a manager provides to the team – the ability to set the course and keep it steady, guiding efforts towards some useful outcome.

But it would be a mistake to model this on one axis, with decisiveness on one side and indecisiveness on the other. There are two components that contribute to the “stickiness” of a decision – direction, and autonomy.

[Figure: the Direction-Autonomy quadrant]


Imagine your typical product manager. Let’s call them Pip. Pip’s day-to-day involves communicating with the business to keep tabs on the strategy and metrics handed down from more senior management. Pip also takes requests coming in from other stakeholders. Before asking the team to start working, Pip will usually take some time to understand the problems more deeply and figure out what a solution might look like. The fidelity of that solution – informed by Pip’s understanding of the problem and ability to define how to address it – represents the magnitude of the direction that Pip is providing to the team.

Thus, direction flows from the organization, through Pip, into the team. If Pip is providing a low magnitude of direction, they might be creating open-ended backlog tickets, or asking to improve product-level metrics that could be raised through many different kinds of intervention. If Pip is providing a high magnitude of direction, they would write highly detailed specifications and measure success through very specific, feature- or screen-level metrics.


If direction is the fidelity of Pip’s requests to the team, autonomy represents the extent to which the team feels empowered to define their own work. A highly autonomous team can challenge Pip’s requirements and understanding of the problem. A team with low autonomy is compelled to deliver only solutions that the PM has already defined.

Autonomy is a function of the balance between Pip’s team-facing influence and org-facing influence. If Pip’s ability to influence the organization is low, all they can do is take orders from more influential colleagues, and try to fulfill them. If Pip is influential in the organization and can manage stakeholders effectively, they can give the team a high level of autonomy. If Pip trusts in the technical and problem-solving expertise of their team, they can provide air cover against the rest of the org, while the team investigates the problem and potential solutions.

The Four Quadrants

The combination of these two dimensions gives us four possible operating modes.

A skilled PM knows how to shift between them to achieve the current most important goal efficiently. However, each operating mode has a failure mode associated with it, where autonomy and direction are not sufficiently balanced, and the team’s decision-making ability breaks down.


A team is operating in the first quadrant if they have no external direction, and no autonomy. All they can do is dither aimlessly. Without either a solid direction to follow or the autonomy to find some way forward themselves, the team has no way to make progress on goals because they don’t have any stable goals in the first place. This usually happens when business leadership has failed to adequately set product goals.

[Figure: Direction-Autonomy quadrant – Dithering]

Like the rest of the team, the PM is not contributing a lot to the organization. Pip spends most of their time copying stakeholder requests from emails to tickets. Priority can change at any time, so Pip can’t commit to studying the problem in more detail. Without a good understanding of long-term goals, and lacking any success metrics, Pip’s only real option is to double down on output. Melissa Perri outlines in detail how to escape this “build trap.”

Pip might be stuck in this quadrant if they or their management are afraid of risk. Risk-reducing strategies are key to moving away from this mode of work; teams should focus on increasing the frequency with which they ship, reducing the scope of each release, and gathering metrics from each iteration. If a release fails to deliver on the desired metrics, it’s not a failure, but a sign to try something else. Releases should be treated as experiments, and an experiment can never fail – it merely produces a negative result, which still contributes to the team’s understanding of the problem space.


If the org is willing to take some risks, Pip can start placing bets on which solutions might make an impact on important problems. If Pip has not secured much autonomy for the team, but is able to give a high magnitude of direction, it’s easy to make a decision and then stick to it. There is only one decider. The team focuses on delivering to the provided specifications without being encouraged to push back or question the effectiveness of the prescribed solution.

[Figure: Direction-Autonomy quadrant – Delivery]

This mode of delivery can work if the team is tackling a well-understood problem. If Pip has the necessary domain knowledge, or has help from outside the team, the engineering team will have everything they need in order to execute the plan. This is the working mode most commonly associated with waterfall delivery models, as well as Scrum. If the business relies on consultants and agencies, or is structured around domain-specific silos, this may be the only choice they have.

The results of working this way hinge entirely on how well the problem was actually understood. Often, the answer is “not as well as you thought”. If the team has sufficient autonomy to pivot away from the prescribed solution, they can still deliver some value, regroup, and save the day. It pays to give the experts room to reach the right outcomes, rather than over-prescribe outputs.

But if the PM has not given the team any autonomy at all, and turned them into nothing more than pixel-pushers, then there is no way to alter course. This mode of delivery becomes nothing more than ruinous micro-management. By trusting the specs more than the team, the PM dooms the entire project to failure.


PMs often try to avoid the above outcome by making a really, really good plan. A perfect plan that will anticipate every problem. But “perfect” is the worst enemy of “good enough” – and so the chase for the perfect plan usually proves futile. Not least, this is because a perfect plan has to involve a very low-level definition of what needs to be built, and few product managers are interested in, or capable of, getting that technical. PMs trying to anticipate risks are better off working with the engineering team and leveraging their technical expertise.

[Figure: Direction-Autonomy quadrant – Discovery]

If Pip trusts the team to design a good solution, they only need to provide a small amount of direction. As long as the team understands the objectives of the project, they will be sufficiently empowered to explore many possible options, and iterate towards an outcome that moves the needle in the right direction. This is the mode you would expect to see in Lean teams, or cross-functional teams that have their own embedded designers, user researchers, and so on.

Because this mode of working has very high uncertainty associated with it, Pip needs to give the team a tremendous amount of air cover. If Pip can’t convince stakeholders that research is necessary in order to ultimately deliver better quality outputs, it can spell trouble for the team’s reputation. And without a certain amount of guidance from Pip, the team’s efforts will spin out of alignment and cease to make progress towards the end goal. Frequent iteration and a laser focus on learning are must-haves if you want to operate in the Discovery mode and not wipe out.


Most of the time, Pip will probably have neither the sway to run in an unconstrained Discovery mode for long, nor the time to put together a spec detailed enough to focus exclusively on Delivery. A balance between autonomy and direction will result in a synthesis of the two other modes.

[Figure: Direction-Autonomy quadrant – Synthesis]

A team in this mode is empowered, but not autonomous. They have the strategic direction they need from Pip, and the freedom to make tactical decisions. The team has an understanding of the most important metrics that the project needs to push, and Pip has engaged the appropriate experts in order to figure out the general shape of a solution that might push those metrics most effectively. This is fairly typical of an organization where Agile is practiced correctly.

But when a strict top-down direction comes up against an empowered team driving a bottom-up initiative, the result can be simply unproductive conflict. PMs facing a deadlock between strategic and tactical needs must tread carefully, or else leave feelings hurt on all sides.

Moving between zones

As a project progresses, conditions around the team’s work will likely change. If the team has been shipping frequently and gathering user feedback, the need for discovery will progressively decrease. Deadlines will loom closer, and transitioning into a delivery-focused mode will help meet them.

But early on in a project, it will usually be more beneficial to do the opposite: secure as much autonomy as possible for the team to get a feel for the problem, explore fruitful directions for solving it, and maybe even push back against the initial definition. Cutting the period of discovery prematurely may significantly limit the impact that the ultimate output will make.

[Figure: Direction-Autonomy quadrant – Discovery]


Pavel Samsonov is a Product Designer at Bloomberg Enterprise. Interested in collaboration between human and machine actors, Pavel is currently designing expert user experiences for Data License products across the GUI, API, and Linked Data platform.

You can connect with Pavel via Twitter.

Code Review Best Practices

As developers, we all know that code reviews are a good thing in theory. They should help us:

  • Find bugs and security issues early
  • Improve the readability of our code
  • Provide a safety net to ensure all tasks are fully completed

The reality is that code reviews can frequently be an uncomfortable experience for everyone involved, leading to reviews that are combative, ineffective, or even worse, simply not happening.

Here is a quick guide to help you to create an effective code review process.

WHY do we do code reviews?

The first question to answer when reviewing your code review process is: what’s the purpose of our code reviews? When you ask this question, you soon realise that there are many reasons to perform code reviews. You might even find that everyone in the team has a different idea of why they’re reviewing code. They might think they’re reviewing:

  1. To find bugs
  2. To check for potential performance or security problems
  3. To ensure readable code
  4. To verify the functionality does what was required
  5. To make sure the design is sound
  6. To share knowledge of the features that have been implemented and the updated design
  7. To check the code meets standards
  8. …or one of hundreds of other reasons

If everyone in the team has a different “why”, they’re looking for different things in the code. This can lead to a number of anti-patterns:

  • Code reviews take a long time as every reviewer will find a different set of problems that need to be addressed
  • Reviewers become demotivated as every review throws up different types of problems depending upon who reviews it
  • Reviews can ping-pong between the reviewer and the code author as each new iteration exposes a different set of problems

Having a single purpose for your code reviews ensures that everyone taking part in the review, whether they’re a code author or a reviewer, knows the reason for the review and can focus their efforts on making sure their suggestions fit that reason.

WHAT are we looking for?

Only when we understand why we’re doing the review can we figure out the things we want to look for during the review. As we’ve already started to see, there are a huge number of different things we could be looking for during our review, so we need to narrow down the specific things we really care about.

For example, if we’ve decided the main purpose of our reviews is to ensure the code is readable and understandable, we’ll spend less time worrying about a design that has already been implemented, and more time focusing on whether we understand the methods and whether the functionality is in a place that makes sense. The nice side effect of this particular choice is that more readable code makes it easier to spot bugs or incorrect logic. Simpler code often performs better, too.

We should always automate as much as possible, so human code reviewers should never be worrying about the following:

  • Formatting & style checks
  • Test coverage
  • Whether performance meets specific requirements
  • Common security problems
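As a sketch of what “automate it” can look like, here is a tiny, hypothetical style check in Python – the kind of mechanical rule (line length, trailing whitespace) that a script or linter should enforce on every change so a human reviewer never has to:

```python
def check_style(source: str, max_line_length: int = 100) -> list[str]:
    """Return a list of style violations found in one source file."""
    violations = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        # Rule 1: lines must not exceed the configured maximum length.
        if len(line) > max_line_length:
            violations.append(f"line {lineno}: exceeds {max_line_length} characters")
        # Rule 2: lines must not end with trailing whitespace.
        if line != line.rstrip():
            violations.append(f"line {lineno}: trailing whitespace")
    return violations

# A file with one trailing-whitespace violation on its first line:
print(check_style("x = 1   \ny = 2\n"))  # → ["line 1: trailing whitespace"]
```

In practice you would wire an existing formatter or linter into CI rather than rolling your own rules; the point is simply that checks like these run automatically, freeing the human reviewer for the questions a machine can’t answer.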

In fact, what a human reviewer should be focusing on might be fairly simple after all – is the code “usable”? Is it:

  • Readable
  • Maintainable
  • Extensible

These are checks that cannot be automated. And these are the features of code that matter most to developers in the long run.
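To make that concrete, here is a hypothetical before-and-after showing the kind of change a readability-focused review tends to produce. The logic is identical; only the names and shape change:

```python
# Before review: works, but every name forces the reader to decode intent.
def calc(d):
    t = 0
    for k in d:
        if d[k][1]:
            t += d[k][0]
    return t

# After review: the same logic, renamed and reshaped so the intent is obvious.
def total_active_balance(accounts: dict) -> float:
    """Sum the balances of all active accounts."""
    return sum(balance for balance, is_active in accounts.values() if is_active)

# Both versions agree; only the second one tells you what it is for.
accounts = {"alice": (100, True), "bob": (50, False), "carol": (25, True)}
assert calc(accounts) == total_active_balance(accounts) == 125
```

No automated tool could have suggested this rename, because the fix depends on knowing what the code means – which is exactly why it is a job for a human reviewer.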

Developers are not the only people who matter though, ultimately the code has a job to do. Our business cares about: does the code do what it’s supposed to do? And is there an automated test or set of tests to prove it?

Finally, does it meet so-called non-functional requirements? It is important to include things like regulatory requirements (e.g. auditing) or user needs (like documentation) in these checks.

WHO is involved in code reviews?

With a clear purpose and a set of things to be looking for in a review, it’s much simpler to decide who should be involved in the review.  We need to decide:

Who reviews the code? It’s tempting to assume that it should be one or more senior or experienced developers.  But if the focus is something like making sure the code is easy to understand, juniors might be the correct people to review – if an inexperienced developer can understand what’s happening in the code, it’s probably easy for everyone to understand.  If the focus of the review is sharing knowledge, then you probably want everyone to review the code. For reviews with other purposes, you may have a pool of reviewers and a couple of them are randomly picked for each review.

Who signs off the review? If we have more than one reviewer, it’s important to understand who ultimately is responsible for saying the review is complete. This could be a single person, a specific set of people, a quorum of reviewers, specified experts for particular areas of the code, or the review could even be stopped by a single veto. In teams with high levels of trust, the code author might be the one to decide when enough feedback is enough and the code has been updated to adequately reflect concerns that were raised.

Who resolves differences of opinion? Reviews may have more than one reviewer.  If different reviewers have conflicting advice, how does the author resolve this? Is it down to the author to decide? Or is there a lead or expert who can arbitrate and decide the best course? It’s important to understand how conflicts are resolved during a code review.


WHEN do we review?

“When” has two important components:

When do we review? Traditional code reviews happen when all the code is complete and ready to go to production. Often code will not be merged to trunk/master until a review is complete, for example the pull request model. This is not the only approach. If a code review is for knowledge sharing, the review could happen after the code has been merged (or the code could be committed straight to master). If the code review is an incremental review that is supposed to help evolve the design of the code, reviews will be happening during implementation.  Once we know: why we do reviews; what we’re looking for; and who takes part, we can more easily decide when is the best time to perform the review.

When is the review complete? Not understanding when a review is complete is a major factor that can lead to reviews dragging on indefinitely. There’s nothing more demotivating than a review that never ends: the developer feels like they’ve been working on the same thing forever and it’s still not in production. Guidelines for deciding when the review is complete will depend upon who is taking part in the review, and when the review is taking place:

  • With knowledge sharing reviews, it could be signed off once everyone has had a chance to look at the code
  • With gateway reviews, usually a single nominated senior (the gatekeeper) says it’s complete when all their points are addressed
  • Other types of reviews may have a set of criteria that need to be passed before the review is complete.  For example:
    • All comments have been addressed by fixes in the code
    • All comments have either led to code changes, or to tickets in the issue tracker (e.g. creating tickets for new features or design changes; adding additional information to upcoming feature tickets; or creating tech-debt tickets)
    • Comments flagged as showstoppers have all been addressed in some way, comments that were observations or lessons to learn from in the future do not need to have been “fixed”

WHERE do we review?

Code reviews don’t have to happen inside a code review tool. Pair programming is a form of code review. A review could be as simple as pulling aside a colleague and walking through your code with them. Reviews might be done by checking out a branch and making comments in a document, an email, or a chat channel. Or code reviews might happen via a GitHub pull request or a piece of code review software.


There are lots of things to consider when doing a code review, and if we worried about all of them for every code review, it would be nearly impossible for any code to pass the review process. The best way to implement a code review process that works for us is to consider:

  • Why are we doing reviews? Reviewers have an easier job with a clearly defined purpose, and code authors will have fewer nasty surprises from the review process
  • What are we looking for? When we have a purpose, we can create a more focused set of things to check when reviewing code
  • Who is involved? Who does the reviews, who is responsible for resolving conflicts of opinion, and who ultimately decides if the code is good to go?
  • When do we review, and when is the review complete? Reviews could happen iteratively while working on the code, or at the end of the process. A review could go on forever if we don’t have clear guidance on when the code is finally good to go.
  • Where do we review? Code reviews don’t need a specific tool – a review could be as simple as walking a colleague through our code at our desk.

Once these questions are answered we should be able to create a code review process which works well for the team. Remember, the goal of a review should be to get the code into production, not to prove how clever we are.

Improve Your Culture With These Team Lunch Tips from 20 Startups

The importance of eating together has long been recognized in positive child development and strengthening family bonds. Eating together is a great equalizer and it can be a good way to help form better and more valuable relationships amongst teams of co-workers too.

I would encourage companies to have rows of long tables. Having round tables means that when looking for a place to sit, you have to pick a group of people. But with long ones you just go and sit at the end of the row. You end up speaking to different people every day, helping to avoid cliques. It’s good for new hires too – they don’t have to sit alone or force themselves upon an unfamiliar group.

Like StumbleUpon, AirBnB, Eventbrite and others, you could have lunch catered. Serve it at the same time every day so everyone knows when there will be people around to go eat with. For the foodies amongst you, employees can share photos of the tasty dishes on the company Facebook page.

Others, like MemSQL and Softwire, have hired in their own chefs. And of course, there are the likes of Facebook, with their own on-site pizza place, burger bar and patisserie, and Fab, who have their Meatball Shop and Dinosaur Bar-B-Que.

It Doesn’t Need to be Expensive

It doesn’t need to be expensive though – you don’t have to provide the food, people can bring their own lunch. The important part is the set time and place to eat together. Make them optional, so that people don’t feel obligated and can get on with critical work if need be.

If space is a problem, then eat out. A group at Chartio, for example, eat together at a different place in San Francisco every day.

Can’t do it every day? No problem. Take Huddle, who have a team lunch once a week. FreeAgent do too, and they keep things interesting by picking a different cuisine from around the world each time.

Stay Productive

TaskRabbit and Softwire have their ‘Lunch and Learn’ sessions. One team member presents on a particular topic of interest, whilst the rest munch away. Twilio use their team lunches for onboarding new hires, who demo a creation using their API to colleagues in their first week.

Small Groups or the Whole Team

It doesn’t have to be the whole team either. Warby Parker, for example, has a weekly “lunch roulette,” where two groups of team members go out and share a meal. HubSpot allows any employee “to take someone out for a meal that they feel they can learn from”.

Get Creative!

There are many creative ideas, too. Shoptiques provides lunch with its Book Club, LinkedIn gets in food trucks every Friday, and GoodbyeCrutches have themed lunches – “Jimmy Buffet Day, Smurf Day, and Pirate Day” being amongst their favorites.

No Excuses

You don’t even need to be in the same country! Crossover holds virtual team lunches where its employees from the US, Russia, Brazil, Romania, Turkey, Uruguay and India gather together and eat whilst in a Zoom meeting room.

So there you go, there’s no excuse to have another sad lunch, sat alone at your desk reading some random blog post…

How have you improved team culture at your workplace? Tweet your tips to @fogbugzteam and we’ll re-tweet the best ones.


Why Have Your Retrospectives Become Unproductive?

Retrospectives provide teams with an opportunity to reflect. They’re an opportunity to discuss what is working and what isn’t, with the goal of iterative improvement. The meetings should create a safe environment for team members to share and discuss processes and practices constructively, so they can come up with actions to resolve problems or improve how the development team functions.

Yet, often this isn’t the case — retrospectives break down, become unproductive or just don’t happen at all.

Here are 3 core failings with retrospectives, along with potential causes and remedies:

1. Retrospectives That Don’t Lead to Real Change

The desire for continuous improvement is at the heart of retrospectives. The feedback gathered during the meetings should result in action items. These action items, upon completion, should deliver positive change. But if the action items aren’t completed, or the true cause of problems is not identified, then faith in the process can wane.

This can come about for a few reasons:

  • Too many action items

It’s important that you don’t try to tackle too many items at once and end up failing to make real progress with any of them.

  • Items are vague or have no clear resolution

The action items you create need to be specific and have a definitive end point. Items like ‘improve test coverage’ or ‘spend more time refactoring’ lack specificity and need to be quantified. Concrete action items provide demonstrable results — allowing the team to see and feel the improvements achieved by following the process.

  • A lack of responsibility for actioning items

Often the facilitator ends up with all the issues, or items are assigned to groups of people. This is a mistake – each item should have a dedicated owner who is responsible for ensuring it gets done, even if a team will be completing the work.

  • Too much emphasis on technical issues

Working with tech is what we do, so identifying problems with systems, servers, libraries and tooling is easy. But you need to ensure that you give just as much attention to working practices, communication, and people problems. Otherwise, these key impediments to improvement will hold you back.

Whatever the reason is, it’s important that you’re completing the action items. So prioritize them and focus on just a handful of items that you know can be done before the next retrospective. Break down larger issues so that you can begin to make progress with them too. Track items raised at previous retrospectives and review results in each session. This sense of momentum helps to build and maintain a belief in the process and fuel future improvements.

2. Retrospectives That Don’t Occur Often Enough

If retrospectives don’t happen often enough, it can cause a number of knock-on effects:

  • Too much to go over in any one retrospective

This results in meetings that fail to get to the cause of issues. Or not enough time is spent on issues that are important to attendees, which can be disheartening.

  • Issues raised are no longer relevant

So much has changed since the items were identified that the issues raised are no longer a priority. This doesn’t mean they weren’t important. More often, it just means you’ve compounded them with others, and you’ve missed an opportunity to improve.

3. Lack of Participation in Retrospectives

This can often happen if the meetings aren’t managed effectively:

  • Sessions are long-winded

You should decide on the agenda before the meeting to avoid straying off the topic and unfocused discussion. You might want to consider time-boxing sections of the meeting. This helps to ensure discussion on one section or type of problem doesn’t consume all available time and attention, and that all areas get adequate attention.

  • Sessions have become stale

Change things up. Try removing seating for one session, so people don’t just sit back and switch off mentally. Or change the format so you aren’t just repeating the same old questions. There are plenty of different techniques: from the Starfish and 4Ls to FMEA, if you decide to deep-dive on a specific issue. Or just pair off and discuss items to bring back to the group. Some people open up better in smaller groups, and one-to-ones force a conversation – otherwise, things get awkward.

  • There’s a lack of trust

A lack of participation can also result from a breakdown in trust. You should only invite team members to take part. Observers, especially management, despite noble reasons for attending, should be dissuaded from doing so. It may seem to make sense to share feedback so that other teams can learn from it too. But the specifics should be kept in the room. People might not contribute if they know there will be a long-term record of it, or if attributable comments are shared. Just share the action items or areas you’re looking to improve.

  • Sessions are too negative

Retrospectives should encourage introspection and improvement. But this doesn’t mean it’s just a time to moan. It can be too easy to focus on the things that aren’t working or you failed to do. So it’s important to make an effort to highlight improvements, and not just from the last iteration but over time too.

With a few changes and a renewed commitment to the process, retrospectives can be a great way of ensuring you’re constantly improving. They can be an important part in making the working lives of your developers better and more productive.