I know Sass isn't the only game in town when it comes to higher-order CSS styling languages. However, it's the one supported by the Jekyll theme I'm using, so I've been dabbling in it.

One thing I love is the color functions that Sass provides. They make it easy to stick with a color scheme: you can style your site with just a few base colors without manually calculating RGB values. Jekyll is a static site generator, so these values get calculated at compile time. For example, they let me define my blog's color scheme like so:

$page-title-color: goldenrod;
$page-title-outline: darken($page-title-color, 20%);

$text-color: black;
$caption-color: lighten($text-color, 50%);

$brand-color: #44eeff;
$link-color: darken(desaturate($brand-color, 50%), 20%);
$pullquote-color: adjust-hue($link-color, 20deg);
You can do more with Sass than just change the brightness of colors.

The lighten and darken functions behave as you would expect, making the color brighter or dimmer. Since this is the "neon" tapir, my base color is rather bright. I use the desaturate function so the link color more closely resembles the brand color, then darken the result for readability. I wanted the pull quote color to stand out some, so I used the adjust-hue function to travel a certain distance around the color wheel. In my case, it turned a blue-green link color into a purer blue.
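
Under the hood, these functions are straightforward arithmetic on a color's HSL representation. Here's a rough sketch of the idea (in Ruby rather than Sass, since that's easier to play with in a console; this is my own illustration, not Sass's actual implementation):

```ruby
# A color as a hash of hue (0-360 degrees), saturation and lightness (0-100%).
# Sass's lighten/darken add or subtract percentage points of lightness,
# desaturate subtracts saturation, and adjust-hue rotates around the wheel.
def lighten(color, amount)
  color.merge(l: [color[:l] + amount, 100].min)
end

def darken(color, amount)
  color.merge(l: [color[:l] - amount, 0].max)
end

def desaturate(color, amount)
  color.merge(s: [color[:s] - amount, 0].max)
end

def adjust_hue(color, degrees)
  color.merge(h: (color[:h] + degrees) % 360)
end

brand = { h: 186, s: 100, l: 63 }          # roughly #44eeff in HSL
link  = darken(desaturate(brand, 50), 20)  # same chain as $link-color above
```

Running the same darken(desaturate(...)) chain as in the stylesheet shows the hue staying fixed while saturation and lightness drop, which is why the link color still "belongs" to the brand color.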

I figured out these values through trial and error, reading the documentation and trying out individual functions on their own. Later, I found a site that provides an interactive guide, A visual guide to Sass and Compass Color Functions. It only offers a color picker, so I couldn't get quite the exact hue I wanted. Nevertheless, a site like this greatly speeds up the process.

For example, I didn't see how the shade function could apply to my color scheme until I started exploring this site. However, I found in experimenting that some of these color functions return results that can't be consumed by others. For example, lighten(shade($color, 20%), 10%) throws an error in one of the Jekyll converters.

The site showed me the effects of various functions, as well as allowing me to tweak the values without having to rebuild my site to see the changes. Had I used this site originally, I feel sure I would have ditched the neon green secondary coloring much faster. I think the goldenrod makes for a more readable design while still fitting in with the "neon" theme. Let me know what you think.

I had a problem. I found that I was having trouble tracking items. Sure, I would get email notifications, pop-ups on the web, notifications from my email client and on my phone. And yet I was ignoring them all. Even with excellent products like Todoist and Google Calendar, I found that my digital tracking system would consistently fall out of date.

At first, I thought it was a matter of discipline. But I quickly discovered that it wasn't a lack of willpower. Despite my commitment to check off items each day, higher-priority items would crowd out the check-in. I put living my life ahead of tracking it. And yet, I would sometimes have to scramble to complete coursework for my MBA, or I would have to apologize to my wife for missing a honey-do item. And I loathed it.

I have taken a number of short trips this spring and summer, mostly for weddings. However, it was on a trip to Eureka Springs, Arkansas that I figured out the issue. I was there to see my son perform in Opera in the Ozarks. Eureka Springs is an Ozark mountain town. It is an hour-long drive on twisty, single-lane roads to the nearest city, "Northwest Arkansas". At least, that's what the census calls it. Locals refer to the individual towns in this metropolitan area of 500,000 people, the most recognized probably being Fayetteville.

In Eureka Springs, the Internet is not a given.

And that's when my digital life began to fall apart. I've been somewhat dissatisfied with my phone and have been biding my time until I can upgrade. With no connection to the outside world, it became all but useless. Often, I left it in the car and carried my point-and-shoot camera instead.

As I sat and read my book, I realized that my digital life was failing me because it wasn't available when I wanted it. I don't use my electronics early in the morning or especially late at night. I thought back to when I was last successful at keeping my backlog straight, and it was with a paper journal. I'd tried using a day-timer for school when the problem first manifested, but I quickly dropped it. It was too much effort to maintain.

Then, I found out about bullet journaling. I immediately felt at home, because it reminded me of the customer support logs I used to keep early in my career. The format itself is pretty flexible and customizable, which also was appealing. Adoption was swift and it felt natural. I knew I'd found a good solution. In a future post, I'll talk about how I use the system. For now, ponder whether your current system is serving you, or you are serving it.

As an experiment, I recently recorded a lunch and learn session at work. The other R&D offices post theirs online, so I thought I'd do the same. On my work machine, my choice of tools is limited. As I researched the subject, I learned some good techniques and wanted to share.

Our office has a GoPro, but its microphone doesn't record sound well, so I brought in my decade-old SD camcorder. The raw footage formed a "minimum viable product": I pointed the camera at the slideshow, made sure the speaker, Paul, was in the frame, and recorded for 50 minutes. The slides are legible, and you can make out what Paul is saying.

In retrospect, I wish I had put a fill light on Paul. The white slides washed out the colors, and the aging camera couldn't cope: Paul is a shadowy figure in the corner of the frame. The camcorder also uses a proprietary .MOD file format, and the raw footage was 1.8 GB, too bulky to share.

Aside from compressing it, I felt there were a couple of small things I could do to enhance the quality of the video.


Today I earn my Master's in Business Administration! I could not have succeeded in this program without the support of my wife Candy and my family. And I thank God for the opportunities this degree program gave me and for blessing me with the strength to see it through.

To celebrate, I wanted to share with you a vignette I found on my hard drive while doing some maintenance. I wrote this in June 2013, when I had just decided to get an MBA. I didn't post it then, but it seemed apropos.

Early in my journey as a software engineer, I was put in charge of maintaining my first enterprise application. It was a web application used by remote sales personnel to sell custom-built houses. I partnered with a talented product manager, who was supportive of the product's technical needs while advancing the product in the marketplace. Together, we took a defect-ridden product that was hard to modify and nail-biting to deploy and turned it into the envy of the department: no known defects and features were turned around in a matter of days.

This early success gave me a passion for craftsmanship. I studied design patterns and enterprise architecture, and began working on larger and more complicated projects in a number of vertical markets. At one point, I was principal architect over a large application with several teams looking to me for direction. And while I've become a respected software engineer in the Denver community, I also learned that building software takes more than technical excellence.

As a result, I studied software delivery processes. I became very interested in effective ways to orchestrate different job roles to make software happen. I led agile delivery teams. I got a certification in Scrum project management. And in the process, I found another passion, for process. I started volunteering for the local community organization Agile Denver. I became the registration director for their annual conferences as well as the technical and quality track manager. This year, I spoke at the Mile High Agile conference about how software design patterns can be found in everyday life.

I think a Master of Business Administration is a logical next step for my career. As an individual contributor, my ability to effect change is limited to my ability to influence the managers, directors, and executives in charge of both technology and product direction. As with my earlier discovery about craftsmanship, I find now that there's more to business than efficient delivery. By learning the business of running a business, I will be in an excellent position to marry my passions for craftsmanship and delivery process to help companies achieve their full potential. Degree in hand, I see myself developing a personal consulting firm or pursuing a leadership position with an established company.

My earliest post is over 8 years old now (April 2016), so it's taken several sessions here and there to do it, but I've converted all my content from HTML to Markdown. I spot-checked my work, but if you see an error, please let me know.

I recently wrote about Cognitive Communication Coaching for Engineers, a talk a colleague gave on presentation techniques. Today, I'm going to apply those concepts to the two keynotes at Mile High Agile 2016 by Jurgen Appelo and Michael Feathers.

Jurgen Appelo's talk was called "Managing for Happiness". He had the audience stand, and then asked three questions. If the answer was false for you, you sat down. It turns out only a very small fraction of people are happy in their jobs. He shared 12 practices for being happy. He then introduced three principles that helped him as a CTO apply these practices:

  • Run experiments, not frameworks
  • Manage systems, not people
  • Focus on progress, not on happiness

These three were his walk-aways. The bulk of the talk focused on a handful of techniques from his Managing for Happiness book (formerly #Workout). He used simple illustrations and anecdotes to communicate his internal map and connect with us. He explained thoroughly enough that we could use these techniques without reading his book (which is quite good). After each technique, he asked us to point to a slide with the three principles and say which one it fit best into. This was an excellent example of a dolphin map.

Michael Feathers spoke at lunch on "A Technical Keynote?". The talk started out strong, but it did not use the techniques Tom taught. It wasn't clear to us in the audience what we were supposed to take away from the talk. The lack of a dolphin map became noticeable toward the latter half, during which I observed a number of people fidgeting and checking their cell phones. The tragedy of the talk was that Feathers didn't spend much time communicating his internal model. He mentioned several things in passing that I wanted to hear more about.

I wish he had introduced half as many concepts and gone into greater depth on each. A glaring example was a tool for visualizing technical debt called CodeCity. People tried to interrupt the keynote to hear more, and it was brought up twice in the Q&A afterward. Feathers tried to distance himself from the tool, saying that it can be hard to interpret and that he wasn't sure of its overall value.

I left lunch with no doubt that Feathers is brilliant, but we in the audience had a hard time following him through his keynote because he didn't use the Cognitive Communication techniques. Appelo's talk, on the other hand, was charming and approachable, in part because of the presence of those techniques. For me, applying those Cognitive Communication concepts to these two talks validated that it's a beneficial set of techniques for my toolbox.

Recently a company called Boom announced that they were creating a supersonic aircraft to replace the Concorde, which hasn't been in regular use since 2003. This called to mind a presentation slide deck I read last year talking about performance that referenced the Concorde.

The thrust of the presentation is that when we speak about performance, there are really two metrics we are conflating. One is the time to do the task, which is measured as execution time or response time. We'll call that one latency.

The other metric is the rate at which work is done, measured in terms of items per time frame. Oranges eaten per hour, for example. This is often called throughput or bandwidth.

It's often the case that latency and throughput are in opposition. Let's examine the Boom, the Concorde, and the Boeing 747.

First, latency. For good performance, we want latency to be as low as possible. The Boom's top speed will be about 1,450 miles per hour, 100 miles per hour faster than the Concorde. A Boeing 747 can only manage about 610 mph. That makes the Boom about 2.4 times as fast as a 747 (1450/610) and 1.1 times as fast as a Concorde (1450/1350). Therefore, a Boeing 747 has much higher latency than its supersonic competitors, taking roughly 6 hours to fly the 3,459 miles from New York to London compared to about 2.5 for the Concorde or Boom. (The Boom arrives about 10 minutes ahead of the Concorde.)

However, let's now look at throughput. A Boeing 747 can hold 470 passengers. That makes its throughput 286,700 people-miles per hour (pmph). A Concorde can carry 132 passengers, so its throughput is only 178,200 pmph. In other words, a single 747 moves about 1.6 times as many people-miles per hour as a Concorde.

A Boom will seat 40 passengers, for a throughput of 58,000 pmph, about a fifth of the 747's.
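
The arithmetic above is simple enough to check yourself; here's a quick Ruby sketch using the route distance and seat counts already given:

```ruby
# Throughput in people-miles per hour (pmph) = seats * cruise speed (mph).
# Latency = route distance / speed.
DISTANCE = 3459.0  # miles, New York to London

planes = {
  "Boeing 747" => { seats: 470, mph: 610 },
  "Concorde"   => { seats: 132, mph: 1350 },
  "Boom"       => { seats: 40,  mph: 1450 },
}

planes.each do |name, p|
  throughput = p[:seats] * p[:mph]
  hours = DISTANCE / p[:mph]
  puts format("%-10s  %4.1f hours  %7d pmph", name, hours, throughput)
end
```

Running it reproduces the figures above: the 747 is the slowest per trip but moves far more people per hour of flying.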

As you can see, latency and throughput can be very different. So, even though the Boom will be an impressive technological achievement, don't look for it to replace regular commercial 747 flights in the near future. To understand why, we need to look at money, specifically operating costs.

For ease of calculation, let's assume all other costs are equal except fuel. A Boeing 747 can carry 48,445 gallons of fuel. It consumes about a gallon a second of fuel, or 20,413 gallons from New York to London. The market rate for jet fuel is $5.99 per gallon at John F. Kennedy International Airport in New York. Therefore, it costs roughly $122,274 in fuel to fly a 747 from New York to London. With 470 passengers, that amounts to about $260 per passenger.
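
At a gallon a second, the 747's fuel bill falls straight out of the flight time. A few lines of Ruby, using the figures quoted above:

```ruby
# 747 fuel cost, New York to London, from the rates quoted in the text.
DISTANCE   = 3459.0   # miles
SPEED      = 610.0    # mph
GAL_PER_S  = 1.0      # the 747 burns about a gallon of fuel per second
PRICE      = 5.99     # $/gallon, jet fuel at JFK
SEATS      = 470

hours   = DISTANCE / SPEED
gallons = hours * 3600 * GAL_PER_S
cost    = gallons * PRICE

puts format("%.0f gallons, $%.0f total, $%.0f per passenger",
            gallons, cost, cost / SEATS)
```

That works out to roughly 20,400 gallons and about $260 per passenger, matching the numbers above.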

For the Concorde, its tanks hold 26,286 gallons of fuel. It burns 5,638 gallons per hour, which equates to 1.57 gallons per second. But because of its much shorter flight time, at the same $5.99 price, it takes only about 14,400 gallons, roughly $86,500, to fuel a Concorde from New York to London. Spread over far fewer passengers, though, that's about $655 per passenger.

It's hard to do the same calculation for the Boom, since its technical specifications aren't as well known. However, if its fuel consumption per hour is similar to the Concorde's, it would cost roughly $80,600 to fuel, or about $2,015 per passenger.

Fare determination is complicated, so let's simplify and say it costs $750 to fly from New York to London. One-way tickets on the Concorde were $1,113 in 1980, which is about $3,400 in today's dollars using an inflation calculator, although people more knowledgeable than me say $5,000 is probably more accurate. That would make the cost of a Boom ticket at least $6,000 to account for the fuel difference.

For most people, spending $5,250 to save 3.5 hours of travel is not worth it. But for some, it definitely might be. It's worth noting that the 40-passenger capacity works to the Boom's advantage here: it's a lot easier to find 40 people willing to spend $6,000 to fly than 132 willing to spend $5,000. The lack of demand is what ended the Concorde program, but the Boom's economics could make it profitable in the niche market of supersonic travel.

In this post, I talked about two different metrics that measure performance: latency and throughput. I also talked about the economics of a supersonic jet and concluded that the Boom might succeed where the Concorde failed. If this kind of math interests you, I recommend you have a look at the presentation that inspired this post. It moves from airplanes to computing power and covers Amdahl's law, a tool for measuring the impact on overall performance of speeding up part of a computation with parallel processing.
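
Amdahl's law itself is one line of math: if a fraction p of the work can be sped up by a factor s, the overall speedup is 1 / ((1 - p) + p / s). A tiny Ruby sketch (the 95% / 8-processor numbers are my own illustration, not from the presentation):

```ruby
# Amdahl's law: overall speedup when a fraction p of the work
# is sped up by a factor s (e.g. spread across s processors).
def amdahl_speedup(p, s)
  1.0 / ((1.0 - p) + p / s)
end

# Even with 8 processors, a task that is 95% parallelizable
# speeds up well short of 8x ...
amdahl_speedup(0.95, 8)          # ~5.9
# ... and no number of processors can beat 1 / (1 - p) = 20x.
amdahl_speedup(0.95, 1_000_000)
```

The serial 5% plays the same role as boarding and taxiing do for an airplane: making the cruise faster only helps so much.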

Today, I want to talk about presentation techniques I learned last week during an excellent lunch talk given by a colleague, Tom Margolis. The talk was titled, "Cognitive Communication Coaching for Engineers". Tom was a high school teacher before he became a software engineer, so he's in a prime position to speak on this subject.

It turns out that because of how our brains work, people can only carry a certain level of cognitive load. It is as though our hippocampus is a post office. When people get exposed to too many ideas at a time, just like a post office getting too many letters, people tend to drop those ideas on the floor instead of remembering them. Tom's talk centered around three principles for presenting information in a way that people don't get overloaded: walk-aways, dolphin maps, and empathy.

I learned about the first principle, walk-aways, under the term "take-aways". A presentation should be designed so that the audience is expected to remember just a handful of key concepts at most. These concepts are known as "walk-aways", the ideas the listener should walk away with.

Tom told us that the presenter should focus on the walk-away. Any information that isn't relevant to the walk-away is fluff and should be discarded. This is not to say that simply stating an idea as fact is sufficient. The presenter still needs to present enough supporting data to convince the audience of their point.

I learned from Tom that when you as presenter choose to introduce that extra information, such as an anecdote or an example, it's important to let the audience know that they are not expected to remember it. After all, it's not the anecdote you tell that's the key, it's the walk-away itself. Hence, the audience can forget the anecdote as long as they remember your point. These three techniques (focus, fluff, and forget) allow the presenter to offer walk-aways in a way that's easy for the audience to digest.

The next principle is that of a "dolphin map". Dolphins take deep breaths and stay underwater for many minutes at a time, but they periodically surface to breathe, releasing carbon dioxide and recharging their lungs with oxygen. A presenter should do the same for their audience, giving them chances to release cognitive load and recharge for more.

Tom likened this to a map that a tour guide hands out. Even though a guided tour won't get lost, the map allows the people on the tour to orient themselves and predict what's coming. The map relaxes people and lets them focus on the sights without worrying about details like when the next bathroom break might be.

Tom suggests that presenters frame the details of each topic, whether the topic is the presentation itself or a particular walk-away. He said the details should be sandwiched between an introduction of the topic and a review. The introduction should explain how the topic fits in. The review should mention this fit again and offer the audience an opportunity to "take a breath" and ask questions.

This sandwich concept is a well-known technique; Aristotle developed the argument casserole recipe of "tell them what you will tell them, tell them, then tell them what you told them". However, Tom took the idea a step further. He used consistent iconography in his presentation so that it was easy for us to look at any slide and tell where we were in the presentation.

The last principle is that of empathy, and it's the one I found most interesting. Tom talked about the idea that each of us has an internal model of the world. Miscommunication abounds when two people have different internal models. One person's model might be rich and detailed, whereas another's might be sparse and still forming. One might be shaped by upbringing, another by personal beliefs.

An effective presenter will keep this in mind when crafting their presentation. With each concept along the way, a presenter should communicate their internal model, going as far as defining important words or explaining how the model came about. In this way, the presenter offers the audience a way to synchronize their internal model with the presenter's.

Tom gave the counter-example of "sticky note" communication. Sticky notes often contain shorthand and only make sense to the writer. They work because they assume a particular internal model. When writing a presentation, it's important to avoid this kind of "sticky note" communication, or you could lose your audience. Presenters should ask themselves at every turn, "can someone misunderstand this?" If so, the presenter can add or clarify information.

Tom also made a distinction between information and education. Information is akin to handing a person an object and saying, "here's my model". Education is the process of helping adjust another person's internal model. This difference comes into play in presentations, especially when planning exercises. Making mistakes with a guide is a good way to learn the process of doing something. It's not a good way to learn information.

Not only is it frustrating to adjust your internal model over and over again just to find out that you haven't matched the guide's model, but science shows that the brain connects ideas without disconnecting them. The mechanism is a lot like the way ants make scent trails. Correct connections are the ones that get refreshed over and over, whereas the wrong ones fade over time. A guide can help minimize the number of wrong connections made, making it easier for the audience to find the right ones.

Tom talked about three techniques for improving presentations: walk-aways, dolphin maps, and empathy. In my next post, I'll talk about the two keynotes at Mile High Agile 2016, and how these techniques manifested in their presentations.

I'm in the home stretch for my MBA studies. I'm taking advantage of the online studies program that Colorado State University offers (go Rams!). Today, I wanted to talk about in-class discussions. These techniques aren't just for college courses; companies could use them to discuss issues affecting multiple offices, too.

During my undergraduate days, there were courses in which I couldn't wait for the in-class discussion. These days, as an online student, I dread them. What's the difference?

Classes in the MBA program are made available by a professional videographer, who controls cameras mounted throughout the room. They film live class sessions, focusing on the person talking or catching reaction shots. Overall, they do a superb job, and it is an immersive experience. Not only that, but it's a watching experience I can have at my convenience.

However, when I find myself wanting to say something, the limits of the medium become clear. Class videos are a one-way transmission. If I watched live, I could send a message to the SC in the room (section coordinator; think teaching assistant). However, I time-shift my classes to match my schedule, so the session is long over before I see it.

Professors recognize this, so they attempt to simulate the discussion. The rest of this post will compare a couple of these asynchronous discussion facilitation strategies. By asynchronous, I mean it in the software sense: the discussion isn't held all at once. Instead, each student contributes at a time of his or her choosing.

The naive approach is to require that students post in a discussion forum on the topic, and then reply to a certain number of other students' responses. This approach does not scale well. Typically, these MBA classes have hundreds of online students, and each new reply makes it harder to contribute something original to the discussion. Therefore, for students like me who time-shift the lecture to the weekend, this setup is crushing. Reading through 273 posts in the hope that no one has already said the thing that came to mind when I saw the material brings me to the discussion with the wrong attitude. Students who post early are at a strong advantage here.

Some professors try to mitigate this risk by having students post in forums specific to their SC. This helps by reducing the number of students involved. With only several dozen students, this approach is manageable. However, it still has the drawback that students can read other replies before posting their own. In fact, there's a noticeable drop in quality as a thread continues. Here, too, the best learning opportunities go to early posters.

Some professors restrict viewing of a forum thread until you have posted. This does make things easier for later posters. However, it can lead to a lot of duplicated comments, which makes the forums less engaging.

ACTIV82LRN is an interactive learning package that creates an environment which has some advantages over a discussion forum. In a typical use case, the first activity is for a student to write a response to a prompt. Normally, they will choose from a set of options (agree/disagree, for example) and then write a detailed response. Then, the student will be shown a sampling of other students' responses. They will be asked to rate the most and least effective responses. The exercises I enjoyed the most were ones where I was asked to write up feedback on a given post. While this method provided the highest engagement, as far as I know, the interactions through the tool were only visible to the graders. I never received feedback given through the tool. Perhaps the "other student" responses were written by the professor.

Regardless, I found ACTIV82LRN far preferable to other discussion forum strategies. Overall, I think the most effective strategy would be real-time ("synchronous") small group discussions moderated by an SC, using a tool like Doodle to coordinate scheduling. However, that's a heavy burden on SCs, who would have to attend many of these small group conversations.

What do you think? If you had a geographically diverse group and wanted to foster a discussion like this, how would you approach it?

Not the Fruity logo, but I like grapefruit

Another tech review, this time a Ruby library. Fruity is a library for performance testing. You have probably heard of its better-known cousin, Benchmark.

When I'm interested in benchmarking, it's usually for small methods. I'll think of a different way to approach a problem, and then the thought will occur to me, "But is it performant?" Fruity is ideal for these kinds of situations. But why import a library when Benchmark is built in?

Let's take a recent example. I was working through the Minesweeper exercise on Exercism.io. I wondered whether there was a better way to make sure my Minesweeper grid didn't have any illegal characters in it. The first way was to convert an array of characters into a string, then use count to make sure the count of all unwanted characters was zero.

After I finished the exercise, I thought about another way. The Array class has an intersection operator &. If I used that, I wouldn't have to convert the array to a string and maybe save some cycles.

When I want a Ruby sandbox, I turn to Pry. I don't use most of Pry's features, but I prefer it to IRB for its better syntax highlighting.

First, some test data:

[9] pry(main)> xyzzy = ['x','y','z','z','y']
=> ["x", "y", "z", "z", "y"]
[10] pry(main)> good = ['+','|','*','1','2','3','4',' ','-']
=> ["+", "|", "*", "1", "2", "3", "4", " ", "-"]

Here's a performance comparison using Benchmark:

[15] pry(main)> n = 100000
=> 100000
[16] pry(main)> Benchmark.bm do |x|
[16] pry(main)*   x.report { xyzzy.join.count('^+|*1234 -') == 0 }
[16] pry(main)*   x.report { xyzzy & good == xyzzy }
[16] pry(main)* end
       user     system      total        real
   0.000000   0.000000   0.000000 (  0.000007)
   0.000000   0.000000   0.000000 (  0.000012)
=> [#<Benchmark::Tms:0x007fff0326ce38

Note that this output is a little cryptic. It does tell me that the first way is faster, but I had to do a little work. I had to think about the number of iterations I wanted for the test. And I had to do some mental math to get the answer I really wanted: "which one is faster, and by how much?"

Now, let's have a look at Fruity:

[11] pry(main)> compare do
[11] pry(main)*   count { xyzzy.join.count('^+|*1234 -') == 0 }
[11] pry(main)*   intersect { xyzzy & good == xyzzy }
[11] pry(main)* end
Running each test 2048 times. Test will take about 1 second.
count is faster than intersect by 3x ± 0.1

Notice a few things about Fruity:

  • There's less boilerplate code
  • I was able to give each algorithm a name
  • I didn't have to specify the number of iterations. Fruity came up with an intelligent guess.
  • Fruity reported its results in relative terms

A simple test like this highlights a difference in approach between Fruity and other benchmarking tools. For more on that, take a look at the project's README on GitHub. In short, Fruity seeks to eliminate noise by not reporting insignificant differences.

Fruity can do more than compare anonymous blocks, as I've shown here. It can compare methods on a class or another kind of callable. It can also run comparisons with parameters.

In cases like mine, clarity trumps precision. It's the difference between asking LaForge a question and asking Data. I'd prefer some synthesis over a data dump (no pun intended). Next time you're wondering about performance in Ruby, give Fruity a try.