Saturday, February 27, 2016

Multi-tasking and context switching

We all want our teams to be 100% utilized. Why hire someone if they will only be working 75% of the time?

As a result, many managers ensure individuals on their team always have a full plate of tasks. However, a task's size is rarely estimated with context-switching costs in mind.

Context switching costs
Neurologically, our brains are not wired for multi-tasking. The brain simply switches from one task to another rapidly.

That shift comes with a cost. Our brains take time to re-adjust and focus on the new task.

Some neurologists estimate that 10-20% of the time a task takes to complete can be lost to context switching.

Take a Java developer, for example. They may have many variable names, functions, and APIs to remember for a particular product. Moving to another product requires almost a clean wipe of that short-term memory and a gradual ramp-up on the new project.
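To see what that cost can look like, here is a back-of-the-envelope calculation. The numbers are illustrative only, taking the midpoint of the 10-20% estimate, not measurements from any study:

```python
# Illustrative numbers only -- a rough sketch, not real measurements.
weekly_hours = 40
overhead = 0.15  # midpoint of the 10-20% context-switching estimate

lost = weekly_hours * overhead
print(f"{lost:.0f} hours/week lost")  # 6 hours/week -- almost a full workday
```

Even at the low end of the range, that is half a day per week spent just re-loading mental state.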

Therefore, the goal is not piling on tasks to guarantee 100% utilization. Rather, it is to improve productivity by reducing context switching.

Reducing context switching
Ultimately, working on one thing at a time is the most effective way to reduce context switching. Other tips to consider:

  • Asynchronous communication -- When things are not urgent, an email may be better than an IM or phone call. It lets the individual stay focused on the current task and respond when they are ready.
  • Setting aside time -- Following on from the item above, emails can themselves become a distraction. Dedicating a set block of time each day to email and other non-project work helps keep focus high during project work.
  • Setting expectations -- Let your colleagues know that at certain times you may be slow to respond. This takes the pressure off you to constantly check email.

Agile plays a huge part in minimizing context switching from a software development perspective. More on Agile in a future post.

The Kanban approach is also useful here. It stresses the importance of a work-in-progress (WIP) limit: each individual may work on only a certain number of tasks at a time, and anything beyond that stays in the backlog. All tasks are prioritized, so there is no slow-down figuring out what to work on next. Our DevOps engineer uses Kanban with a WIP limit of two. More on Kanban in the future, too.
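To make the WIP limit concrete, here is a minimal sketch in Python. The class and method names are my own illustration, not from any real Kanban tool: a new task can only be pulled when fewer than the limit are in progress, so everything else waits in the prioritized backlog.

```python
from collections import deque

class KanbanBoard:
    """Illustrative sketch of a Kanban-style WIP limit."""

    def __init__(self, wip_limit):
        self.wip_limit = wip_limit  # max tasks in progress at once
        self.backlog = deque()      # prioritized: leftmost is next up
        self.in_progress = []

    def add_task(self, task):
        """New work always enters the prioritized backlog."""
        self.backlog.append(task)

    def pull_next(self):
        """Pull work only when under the WIP limit; otherwise it waits."""
        if len(self.in_progress) >= self.wip_limit or not self.backlog:
            return None
        task = self.backlog.popleft()
        self.in_progress.append(task)
        return task

    def finish(self, task):
        """Completing a task frees a WIP slot for the next pull."""
        self.in_progress.remove(task)


board = KanbanBoard(wip_limit=2)  # like our DevOps engineer's limit of two
for t in ["deploy pipeline", "fix monitoring", "rotate certs"]:
    board.add_task(t)

board.pull_next()                 # "deploy pipeline"
board.pull_next()                 # "fix monitoring"
print(board.pull_next())          # None -- WIP limit reached
board.finish("deploy pipeline")
print(board.pull_next())          # "rotate certs"
```

The key design point is that the limit is enforced at pull time: work is never pushed onto a busy person, which is exactly what keeps context switching down.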

Sunday, February 21, 2016

MVP: failing fast is a good thing

How many times has your team had a scenario similar to the following example?

A senior client says a product cannot launch until one particular feature is added. Her reasoning: "No one will ever use this product unless this feature is included."

Although small in size, the feature is quite complex and would require another two months of development effort. The team works tirelessly for those two months and launches the product shortly thereafter.

A few weeks after go-live, the analytics reveal an unfortunate reality: while the product overall is getting good traction, no customers are actually using the feature the senior client insisted on.

This scenario is exactly what Eric Ries' book, The Lean Startup, tries to address. In it, he examines how successful software companies build and launch products.

Minimum Viable Product (MVP)
Ries describes the MVP as the version of a product with the minimum set of features that allows for learning from early adopters. Using the MVP, we avoid building products no one wants and maximize the learning per dollar spent.

The image below takes us through Ries' build-measure-learn feedback loop. An idea is formed, then built and released as an MVP. That MVP contains measurements, or ways to pull data from which we can learn. From there, the product team is prepared to act on that data and pivot or iterate.

Through small increments we can continue to test hypotheses and build a better product by minimizing the time through the feedback loop. If a particular feature or iteration is not successful, we learn early in the process through facts (analytics, metrics, user feedback, etc.).

This means failing fast is a good thing! Validated learning means we do not have to wait months before we find out no one will use a particular feature. We spend more time on things we know the users will want.

Image credit: Eric Ries, TheLeanStartup.com

Putting it into practice
I like to think of MVP as happening at each phase of the software development life cycle, in addition to the product viewed as a whole.

Take the design phase, for instance. Low-fidelity mock-ups (think black-and-white, hand-sketched) are key because they speed time through "the loop." The goal is to get feedback fast -- how can you do that if you're spending time perfecting the shade of blue a button should be?

When it comes to the product owner's vision for features in release planning, how many of those features rest on an unvalidated hypothesis? What could be built and released quickly as an MVP instead?

Teams must be prepared to iterate. We cannot launch something and forget about it. We must release our MVPs, analyze the results, and either pivot (move in a different direction from our hypothesis) or iterate based on what we learn.

It is important to remember that software has no value until it is in the hands of the user. The MVP approach gets more engaging software to users faster by adapting incrementally.