Lean Wastes and Software Delivery: Wasted Motion

Transportation in this context refers to the movement of products and materials. For a software development team, this means bytes and ideas. When the transport is hampered, it’s a form of waste. Later, I’ll talk about other kinds of motion.

Thinking about teams, there are a number of infrastructure areas to address. Unless they are all on a beach somewhere, there are some structural considerations: light, heat, plumbing, snack cabinets and so on. If teams have to struggle with unwanted noise or shiver to stay warm, ideas don’t flow as well. Environmental distractions are a waste.

Beyond survival needs, some environments are more conducive to work than others, and ideas on a productive work environment vary by person. Some people find the childlike halls of a Google or Zappos to be idyllic; others prefer the coffee or sandwich shop down the street. Effort spent enduring or overcoming the environment is waste.

Teams need proper tools. On a software delivery team, each person probably needs a computer, and probably one more powerful than an entry-level machine at a big box store. Some team members will need code development tools. Others will need modeling or testing packages. Others may need project management tools. Each of these tools produces files. All these digital assets imply a storage medium, usually a hard drive.

The movement of bytes within a computer can be a source of waste. Hard drives get cluttered and fragmented, and drive speed can make a real impact. I recently benefited from replacing a standard SATA drive with a hybrid solid-state drive: it reduced my compile and unit test time from about 15 minutes to under 3. Where I would do 20 local builds a day before, I can do triple that now. That means a feature is finished in a day or two rather than a week — I’ve doubled my productivity for a few hundred dollars.

People install programs they no longer need, and a certain amount of regular maintenance is needed to undo the “wear and tear” this puts on their computers. Even fanatics of Apple computers, known for requiring little maintenance, will occasionally “nuke and pave” their systems, wiping the hard drive and restoring a copy of their user settings and applications atop a fresh operating system installation. The time spent doing this could have been used for production, so it is waste.

In addition, team members need transportation infrastructure to share their work. In other words, these machines need a way to communicate with each other. This implies a network of some kind, with all the switches, hubs and cables that entails. Stretching the physical goods analogy, if the team is shipping code over a slow network, that’s waste. Early in my career, my colleagues and I developed a habit of checking in our code, then going to the break room for some coffee or a round of foosball while the code made its way over the network to the server. If you had data files to share, you would put them on a disk and walk them to your neighbor’s cube. In 2013, with adequate network hardware being a commodity, there’s no excuse for that.

And if the files being transported contain features that are no longer used, or if they are large digital assets where small ones would do, or if the traffic is less valuable — that is, everyone is streaming Pandora instead of production logs — then the bandwidth spent carrying those assets could have carried something more valuable, and that is transportation waste.

And I’d be remiss if I didn’t mention that the internet itself, when used improperly, can be wasteful. Companies lose a portion of each work week to non-work-related internet use. While that flexibility might mean that employees do not need to take time away from work to handle quick personal affairs like banking, excessive zeal in making sure cats can have cheeseburgers is probably lost productivity.

Wasted Motion

I’ll admit, there isn’t a lot of people motion in software delivery. Software teams rarely deliver physical goods; they produce binary goods. Or is there? In a typical cube farm, how many times do you go to a meeting? How many times do you search for an open huddle room? I have worked in places where people would spend 15 minutes, 30 minutes, sometimes an hour negotiating with their peers to secure a conference room. At another company, managers would walk pieces of paper around to get physical signatures for permission to release a product into production.

There is always the teleconference. I don’t need to tell you that co-location, sitting team members next to each other so they can talk freely, is less complicated and less expensive than having fast computers and a robust network infrastructure for webcam interactions. If you have to peek over a cubicle wall, or worse, pick up a phone and dial someone to ask their opinion, there will be times when that seems like too much of a bother — and a chance to share a critical insight is wasted.

This is not to say that distributed teams and these technologies don’t have their place. Sometimes the right people for a team don’t live in the same city. And meeting rooms for a thousand people are prohibitively expensive. Clearly, though, there is a trade-off for all of these pieces of infrastructure, and if you are paying for portions of that infrastructure that are unused or underutilized, that’s a form of waste. There is a reason many coaches like me advocate co-location: it reduces the cost of sharing ideas.

Even if you don’t want to do this all of the time, periodic use can be beneficial. Providing communal work space is a good idea, even though poorly-designed bullpens can introduce challenges from lack of personal space. I know some teams that book a conference room and work there together for an afternoon a week.

Another example is one of the mainstays of scrum: the daily scrum, a common time and place for the team to gather, share their triumphs and fumbles, and discover obstacles to success. And many teams will make use of a “war room” when it’s critical for the whole team to be focused on the same thing. I tell my teams: when things get tough, get closer.

Even in applications, there is wasted motion. Think about inefficient user interfaces where common functions take several clicks. Attention to user experience can reduce wasted motion. I won an innovation award early in my career by creating an adaptive wizard for data entry and making JavaScript improvements to page responsiveness. Data entry of a contract went from 20 minutes down to 5, which was an incredible time savings for the application’s customers.

Our last installment will address overproduction and overprocessing.


Lean Wastes and Software Delivery: Wasted Inventory

In manufacturing, any piece of work that is ready for the next step might be considered inventory: finished goods, works in progress, and even raw material.

For a software delivery team, it’s hard to measure inventory. Some teams are done with a feature when it is checked into source control; others are done when they have delivered build artifacts. A team’s definition of done comes into play here. A definition of done can be thought of as a list of characteristics of completed work. One common example is that all new functionality is covered by unit tests. If Team Bravo tells me a feature is done, then I can assume that if I ask to see unit tests that exercise the new feature, Team Bravo can point me to them.

Whether the finished product is code or binaries, until that deliverable is in the hands of customers (internal or external), the producer earns nothing from producing it, and thus that code has the potential to be waste. Firms don’t pay teams to produce code; they need their customers to pay for the features that are realized by that running code. And I’ve been on teams producing a feature that was cancelled or shelved before delivery.

Before the advent of distributed version control systems like Git and Mercurial, source control systems used to store a full copy of each file being monitored. In systems like those, branching was expensive in terms of time and disk space. Keeping dormant branches beyond their usefulness was a form of waste. Even using Git, I recommend that teams utilize tags and remove inactive branches, because branches are easy to recreate.

Still, having seen the contents of many of my colleagues’ “temp” folders, I know teams need to be disciplined about cleaning up build and work artifacts — for example, sample production data used to diagnose an issue solved months ago. Often these files can be huge, which makes them poor candidates for a version control system. Teams should take the time to set up a shared file system for such files, so that a hard drive failure on Keith’s machine won’t start a panic. Free options like Box, Dropbox and SugarSync might suffice for small shops; enterprise solutions like Amazon S3 also exist.

Developers also experiment on their machines, and it’s easy for those machines to become tainted by unusual software installations as well as experimental and debug versions of code libraries. Having a continuous integration system can help protect against unwittingly building an environmental dependency into your software. I say from experience that you don’t want to be in a position where you need to keep a departed developer’s machine up and running on the network because it’s the only place a piece of production code will compile.

Every group that produces an output (code library, software package, what have you) requires a set of inputs. Software is built from raw materials, but instead of physical items, programs are built from ideas. Backlog grooming is analogous to a crude oil refinery, substituting ideas for crude oil and requirements for gasoline.

If a team does not have good discipline around its backlog, much like those “temp” folders above, there will be items in the backlog that won’t see the light of day in the near future. Ideas that are no longer relevant, or as some say have been “overtaken by events”. Ones that aren’t commercially viable. Defects that don’t apply anymore. If the backlog gets months long, you may find that people create new stories and features that duplicate ones already in the backlog.

Whether it’s the team that decides what to do next or a product owner, each of those items needs to be reevaluated. This accumulated time reviewing stale features is wasteful. Teams should strive to keep their backlogs trim. Really, as long as there is always one single thing more to do, the team can continue working, so teams should keep their backlogs as small as possible without running dry. A kanban board can be of immense help in planning workflows like this.

Next installment is wasted motion and transportation.


Lean Wastes and Software Delivery: Wasted Talent

It’s easy to find examples of wasted talent, which comes from not making full use of a person’s talent, skill or knowledge.

I’ve seen this in firms where work assignments are not fungible. For example, imagine a shop where most of the software is written in Java, but some is written in Ruby from an acquisition. Let’s say there are 15 Java programmers and 5 Ruby programmers, and neither group is particularly familiar with the other language.

One day, the company publishes its quarterly roadmap. On it, there is a single feature for a Ruby product and ten features for Java products. The Ruby developers are left with a choice of learning Java or lying fallow. Either choice leads to individuals not utilizing their Ruby skills, which is a waste.

One technique that can help with wasted talent is paired programming. The effect of pairing is skill and knowledge transfer, so use of paired programming can help the Ruby programmers learn Java and those Java products more quickly. While pairing is often slower than solo programming, it does lead to fewer defects and allows both programmers’ talents to be used — albeit at a lesser capacity at first.

I saw this waste occur when a firm hired a quality engineer who was skilled at test automation but was instead tasked with manual execution of test cases. In particular, the regression suite of the legacy product took days to walk through, and the test teams received code too late to have adequate time to execute the full suite.

Test automation is an investment in the product, and it’s not free. Even for a greenfield application, it will often take more time to write an automated test than to execute it manually. Of course, once the test is automated, it can be executed in a fraction of the time, which pays dividends.

Pairing in this case was challenging because many of the manual testers did not know how to program. And automation of the regression test scripts right before a release is not an ideal classroom setting. A coding dojo would be a better environment to convey the power of test automation.

Coding dojos are collaborative learning environments where people can acquire new skills. For example, there is an exercise (“kata”) called the Bowling Game that’s used to teach test-first development (TDD). Participants in the kata, even ones who haven’t programmed before, learn the exercise by rote to start with. Questions lead to learning programming concepts as students are ready. There are different katas and sample applications available on the internet that would make a suitable subject for test automation.

Sadly in this case, the firm did not want to make that investment. Predictably, the test automation engineer found a position elsewhere because he found the work unsatisfying.

Next time, we’ll look at wasted inventory.


Example #47 of Why Programmers Struggle to Talk to Humans

My friend and fellow programmer Josh just walked over to my cubicle and asked if I had any disinfecting wipes. I opened my drawer, looked at the shelf, said “yes” and closed the drawer.

He looked annoyed for a moment, until I said, “I believe in Command/Query separation.” He smiled and then said, “May I have one, please?”

Command-query responsibility segregation (CQRS) is a good data access pattern for complicated object domains, in which the way you search or query for data is kept separate from the way that you modify or run commands against the data. When you are just getting data, you don’t need to worry about data integrity rules. In this case, I kept Josh from asking whether I had disinfecting wipes and asking for one in the same sentence.
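Here is a minimal sketch of the idea in Java, using a hypothetical WipesDrawer class invented for this example: the query answers a question without changing anything, while the command changes state and returns nothing.

    // A hypothetical drawer of wipes, illustrating command/query separation.
    public class WipesDrawer {
        private int wipeCount;

        public WipesDrawer(int wipeCount) {
            this.wipeCount = wipeCount;
        }

        // Query: answers "do you have any wipes?" without changing state.
        public boolean hasWipes() {
            return wipeCount > 0;
        }

        // Command: hands over a wipe (changes state) and returns nothing.
        public void takeWipe() {
            if (wipeCount == 0) {
                throw new IllegalStateException("No wipes left");
            }
            wipeCount--;
        }
    }

In these terms, Josh issued a query when what he really wanted was a command.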

However, I discovered today it’s not a good pattern for human interaction. For simple domains like the everyday world, it’s probably enough just to ask for one.


Lean Wastes and Software Delivery: Waiting and Defects

Lean thinking comes from innovation in statistical process control. The idea is that honing techniques leads to measurably better results. Lean identifies eight “wastes”, activities that subtract value from a process. They are:

  • wasted talent
  • wasted inventory
  • wasted motion of people
  • wasted time (waiting)
  • wasted transportation of goods
  • defects
  • overproduction
  • overprocessing

From the perspective of a software delivery project, some of these wastes don’t seem to apply. In this series of posts, let’s focus on each in turn, starting with waiting and defects.

Waiting and Defects

I hate waiting.

— Inigo Montoya, The Princess Bride

When people think of lean processes, the waste of waiting is usually what they have in mind. In software delivery, it can manifest in a number of forms, and it turns out that techniques that reduce waiting can also reduce defects. Defects occur when software doesn’t behave as intended.

When a delivery team member joins the team, inefficiencies in the onboarding process can introduce waste. If they are also joining the firm, the costs steepen. Every moment that the new person spends not adding value to the product is waste from this perspective. Thus, it’s important for the new person to be integrated into the team as quickly as possible. I recommend paired programming to minimize this kind of waste. Shadow a person whose role you’ll be filling. See what they do and don’t do, and learn the new codebase like you would a foreign language, through immersion.

Okay, so all of the team is onboard. The next step is to look at your product delivery process for bottlenecks. In a lean process context, a bottleneck is a part of the process where flow is constricted. While there may be a number of inefficiencies in a system, the most severe bottleneck will constrict the entire system. Troubleshooting bottlenecks is beyond the scope of this post, but one of my favorite guides to the process is the Universal Troubleshooter’s Guide.

One of the many examples given is a bicycle commute, with the goal of improving a two-hour commute time. If you have a $75 coaster brake bike, what keeps you from going faster — the bottleneck of the system — is having just one gear, which is poorly suited to hills.

For $150, you might get a mountain bike with multiple speeds that gives you a 20% faster commute. With the first bottleneck removed, the next limitation is wind buffeting from an inferior riding posture. A $300 road bike, which offers specialized tires and a better riding posture, can solve that bottleneck, yielding a 17% improvement in commute time.

However, there is a point of diminishing returns. The next step up would be a mid-grade road bike, which overcomes the limitations of the entry-level components, but it only offers a 5% improvement for double the cost at $600. Other approaches like training or riding clothes would likely yield better results, which implies that at this point bike components aren’t the bottleneck.

With the bike example in mind, think about your software delivery process. From idea to requirements to design to implementation to testing to customer validation, where is the point that work tends to pile up? That point is likely to be the bottleneck. Lean process has a lot of good guidance on finding and removing bottlenecks, and it’s worth exploring even if you are using another agile methodology like scrum. Once you identify the bottleneck, there are techniques that can increase delivery speed at each stage:

Idea to requirements: backlog grooming, domain-specific language (DSL), working agreements
Requirements to design: prototyping, behavior-driven development (BDD), domain-driven design (DDD), unified modeling language (UML)
Design to implementation: paired programming, continuous integration (CI), version control system (VCS), test-first development (TDD)
Implementation to delivery: automated acceptance testing, continuous deployment

Notes: “Paired programming” is usually called “pair programming”, though I prefer “paired” because it is a better adjectival description of the activity than the verb “to pair”. Similarly, TDD usually stands for “test-driven” development, but I prefer “test-first” because it more accurately sums up the philosophy of writing software guided by tests.

Backlog grooming allows a product owner to engage a delivery team during story creation. These sessions offer teams a separation between communicating what the work is and deciding how to accomplish the work. The product owner can use the feedback on the newly-drafted stories to refine the stories, address the team’s questions and concerns, and use the team’s rough size estimates for release planning. Without backlog grooming, these “what” and “how” discussions both happen during iteration planning. I’ve found mixing the two discussions to be inefficient, because the variations in design and other aspects of the “how” are much larger if team members have different concepts of what’s being requested, and thus it takes longer to reach consensus.

Product owners and teams need a common trade language to bridge the gap between business requirements and technical requirements, and the best tool I’ve found for this is behavior-driven development, or BDD. I prefer Cucumber as a framework, which forms a domain-specific language (DSL), or jargon, for communicating requirements. Each feature and requirement is expressed in a consistent format, which programmers can easily map to automated tests.
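As a rough illustration, here is how a Given/When/Then requirement might map onto step definitions using Cucumber’s Java bindings. The contract scenario and the Contract stand-in class are invented for this sketch, and the annotation style assumes a recent Cucumber-JVM release with JUnit on the classpath.

    // Hypothetical step definitions for a scenario such as:
    //   Given a contract with 3 line items
    //   When the contract is submitted
    //   Then the contract status is "Pending Approval"
    import io.cucumber.java.en.Given;
    import io.cucumber.java.en.Then;
    import io.cucumber.java.en.When;
    import static org.junit.Assert.assertEquals;

    public class ContractSteps {

        // Minimal stand-in for a real domain object.
        static class Contract {
            private String status = "Draft";
            Contract(int lineItems) { }
            void submit() { status = "Pending Approval"; }
            String status() { return status; }
        }

        private Contract contract;

        @Given("a contract with {int} line items")
        public void aContractWithLineItems(int count) {
            contract = new Contract(count);
        }

        @When("the contract is submitted")
        public void theContractIsSubmitted() {
            contract.submit();
        }

        @Then("the contract status is {string}")
        public void theContractStatusIs(String expected) {
            assertEquals(expected, contract.status());
        }
    }

The plain-language scenario is both the document the product owner reads and the test the build runs, which is exactly the kind of shared jargon BDD is after.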

This idea of a DSL can be extended further to the problem domain, which brings up a number of concepts from domain-driven design, especially ubiquitous language. It’s very helpful if the business experts use terms the same way as the technical staff. It helps them both understand the problem space better. While it’s true that domain-driven design includes a number of tactical technical patterns, it also contains a number of strategic approaches for modeling domains that are of use to both business experts and delivery technicians.
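To make the ubiquitous language idea concrete, here is a small sketch in Java. The insurance terms (policy, premium, renew) are invented for the example; the point is that the code reads in the same vocabulary the business experts use.

    // A tiny domain object whose names come straight from the business
    // vocabulary: "Policy", "premium" and "renew" mean the same thing in a
    // conversation with an underwriter as they do in the code.
    import java.math.BigDecimal;
    import java.time.LocalDate;

    public class Policy {
        private final String policyNumber;
        private BigDecimal premium;
        private LocalDate expirationDate;

        public Policy(String policyNumber, BigDecimal premium, LocalDate expirationDate) {
            this.policyNumber = policyNumber;
            this.premium = premium;
            this.expirationDate = expirationDate;
        }

        // Renewing a policy extends coverage a year at the newly quoted premium.
        public void renew(BigDecimal newPremium) {
            this.premium = newPremium;
            this.expirationDate = this.expirationDate.plusYears(1);
        }
    }

When a business expert asks what happens on renewal, the answer lives in a method named renew rather than in a utility class with a technical-sounding name.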

The same holds true of technical diagrams. Since there are so many ways to visually communicate information, it can take time and effort to explain how symbols are being used. Use of a common standard like the unified modeling language (UML) can alleviate this. For people who know the language, the individual symbols fade into the background and the focus becomes the message they are intended to convey.

Just as a ubiquitous language and UML can help smooth communications, working agreements can help teams make decisions ahead of time. Common working agreements include a definition of done and a definition of ready. For example, as part of a definition of ready, if all team members accept that a story isn’t ready to code until the acceptance criteria are defined — the things the product owner expects to be true when the story is ready for acceptance — then that’s one decision that doesn’t need to be made with every story. The conversation shifts from “hey, do we need acceptance criteria on this one?” to “how will we show you that we’re done with this story?” Rehashing decisions over and over is a form of waste. Teams should only revisit these agreements once in a great while or when something changes.

Prototypes are a 21st Century version of the saying, “a picture is worth a thousand words”. When exploring new applications, new technical infrastructure or new approaches, prototypes can be very useful. They offer an environment for engineers to learn how best to leverage technologies to achieve a goal. Prototypes can also be useful to business experts, who can get some tactile feedback on their proposed workflows. I know teams that use prototypes even for established applications to verify their understanding of the requirements.

When working in a language that is as abstract as a computer programming language, even the most expressive code can be challenging. Paired programming is the most direct way for one delivery team member to communicate low-level technical implementation details with another one. Although it can be slower to implement a feature using paired programming than with a solitary programmer, studies have shown that the code produced by a pair contains fewer defects and that the majority of defects are found during code reviews. For a pair of programmers, code reviews occur naturally and continuously. Because mistakes are least expensive to fix while the code is being written, the overall time to produce quality code is reduced by paired programming.

Sitting next to another developer isn’t a luxury everyone can enjoy. I know a number of developers who pair remotely using a virtual network computing (VNC) tool like WinVNC and a telephone, or who get a second opinion with a screen-sharing tool like join.me or Skype. Many developers don’t see paired programming as a luxury, but as a burden. For those uncomfortable with the concept, a weaker alternative that provides many of the benefits is an interactive code review, in a conference room or via the software mentioned above. For distributed teams, or teams in a meeting-heavy work environment, a source control tool like Git with support for pull requests can provide a forum for asynchronous code reviews.

Continuous integration (CI) describes a practical approach to coordinating the work of multiple developers through the use of a common code location. It’s advantageous when that code location can track changes, which is why most developers make use of a version control system (VCS) like Subversion or Git. Originally built to help coders collaborate, version control systems are now used by book authors and even lawmakers.

While a VCS can help contributors understand the evolution of a work product, it can’t tell them whether the changes are productive. A CI tool like Jenkins or TeamCity automates the process of watching for changes and taking action, usually building the software and running automated tests. A CI server can also be used for other tasks, like compiling code quality reports and building documentation.

One of the more popular uses is continuous deployment, in which build artifacts that pass automated tests are automatically deployed to a test environment for manual inspection. Some teams instead set up deployment at the push of a button, so that long-running tests won’t be disturbed by an untimely deployment. And in a world where it’s easy to deploy changes, or one with a ready failover environment, companies can simply deploy to production automatically, knowing that they can easily revert if need be.

Testing and Defect Reduction

Testing does not improve speed to market, which is a key reason why delivery teams sometimes sacrifice it. In most cases, though, testing is an important discipline for agile teams, because quality code improves a product’s chances for longevity in the market. Here, we are talking about two primary kinds of testing: unit testing and automated acceptance testing. Both kinds of testing are about delivering the right thing to the customer, not about delivering it faster.

Test-first development is a tactical software development approach that ensures the programmer is the first user of the code being written. Each unit of code is exercised by a test written before the code itself. I find that it quickly exposes potential design issues early, especially when it comes to consuming objects.

In writing these “unit” tests, I sometimes find myself writing boilerplate code or having to always use certain objects in conjunction, both of which can indicate beneficial refactoring. I sometimes find that I’m contemplating using an object in a new or different way, which can indicate a design flaw. I let my intuition be my guide — there is a difference between not writing tests because they are uninteresting (property getters and setters come to mind) and not writing tests because they are hard to orchestrate. I aim for 80% code coverage by tests, which gives me the benefits of TDD without imposing a dogmatic requirement to write uninteresting tests.
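For a concrete feel of the rhythm, here is a minimal test-first sketch using JUnit, borrowing the opening move of the Bowling Game kata. The test is written before any scoring code exists; the BowlingGame class below is the simplest code that makes it pass, ready to be pushed further by the next test.

    // The first test of the Bowling Game kata: a gutter game scores zero.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    public class BowlingGameTest {

        @Test
        public void gutterGameScoresZero() {
            BowlingGame game = new BowlingGame();
            for (int i = 0; i < 20; i++) {
                game.roll(0);  // twenty rolls, all misses
            }
            assertEquals(0, game.score());
        }

        // The simplest implementation that satisfies the test above.
        static class BowlingGame {
            private int score = 0;

            void roll(int pins) {
                score += pins;
            }

            int score() {
                return score;
            }
        }
    }

Each new test (a game of all ones, then a spare, then a strike) forces a small design decision, which is how the tests end up guiding the shape of the code.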

Acceptance testing is closely related to unit testing, but for many, it’s a separate discipline. Not only do developers want to know that their new code is working correctly, they want to make sure it integrates with existing code or systems. These tests have a different audience, so they should be considered separately. Whole books have been written on this subject (please see them if you want strategies for agile testing), but it’s worth noting that troubles found during deployment or integration are still less expensive to fix than those found after shipment to the customer. Accordingly, it’s worth some effort to reduce defect escape rates with acceptance testing. For people who practice BDD, their Cucumber specifications can easily be implemented as acceptance tests.

While tools like FitNesse and Cucumber can help with acceptance testing, a CI tool can run those tests automatically. This allows software development teams to develop a “car dashboard” of their code health, letting them know whether it’s safe to proceed or time to pull into a service station. Many teams create working agreements for how to handle failing builds or broken automated tests.

Sometimes defects occur because a piece of software works but is hard to use. Attention to user experience can help alleviate common data entry mistakes: clicking the wrong button when two are placed too close together, confusing labels, functionality obscured in configuration menus, and the like.

Next installment, we’ll examine wasted talent.