Shared: Is the Distinction Between Outcomes and Output Overdone?

In about half the conversations about agile, I hear someone make the point that “teams should focus on outcomes rather than outputs.”

It’s gotten to the point that I’m tired of hearing it, especially because I somewhat disagree with it.

I know that’s blasphemous to say, but hear me out. To start, let’s be clear about the difference between the two words.

An outcome is a result. It’s something a team or product has achieved. An output is something produced along the way. The output isn’t always important in and of itself, but it leads to the outcome.

Mike Cohn: Is the Distinction Between Outcomes and Output Overdone?

What a timely post by Mike Cohn (from whom I received product owner training way back in the day; a great course by a great trainer and practitioner). I was just harping on this with our technical product management team at work the other day. My take was that we were too focused on outputs and spent little time focusing on outcomes. Examples of this behavior: spending most of our time discussing team velocities and sprint commitments vs. delivery, and celebrating success as completing something on time and on budget. None of these areas are bad to track. Team velocity is important to know for planning and as one indicator of team performance. Sprint commitments vs. delivery can be one input in determining the health of a team. Celebrating success when things are delivered on time and on budget is good, as long as that is not the only thing being measured and celebrated.

To Cohn’s point, outcomes are a trailing indicator, while outputs are a leading indicator. It would be foolish to try to measure based only on outcomes. We don’t need to pick “sides”. But overemphasizing outputs can lead to missing the real impact we’re trying to deliver with our software development efforts. I’ve seen this happen time and time again, especially in Agile/Lean software dev teams that are optimized to deliver software but not necessarily to measure the outcomes. The number one reason I hear for measuring outputs and not paying much attention to outcomes is that getting the data on outcomes is difficult. Gathering data on outcomes is normally difficult because it is not prioritized. We want to build the new feature, fix the bug, launch the new product, and in order to do that ASAP, we remove “superfluous” items like ensuring we have ways to measure the outcomes of our work. This is a mistake.

Yes, there has been a lot of emphasis on measuring outcomes over outputs, and, yes, it has been overstated at times. However, my theory is that if you asked most agile software development teams how they measured the success of their work, a majority would have little to no outcome-based measurements. And the danger in placing so much emphasis on outputs over outcomes is that we deliver things that don’t really matter for our customers and businesses.

Put it on the blockchain (no, really)

My arsenal of blockchain jokes would seem limitless, so I’m as surprised as anyone that I’m sharing this – Garbage Pail Kids (GPK) on the blockchain. Regardless of one’s opinions about the value of “digital goods”, in 2019 $35B of digital goods were purchased in video games alone. That’s a lot of real $$$ for bits & bytes. A problem with most digital goods is that they are locked to one company, one proprietary platform, one proprietary marketplace (if any marketplace at all). Put digital goods on an open blockchain and it makes for some interesting possibilities.

Topps, owner of the Garbage Pail Kids brand, has put out a version of digital GPK cards on the WAX blockchain. In the world of collectibles, Topps was limited in its ability to fully capitalize on the market for its cards. For example, someone purchases a pack of cards, and Topps makes money on that sale. When that same person later sells some of the cards on eBay for a premium, Topps does not see any of that money. On the blockchain, digital cards are governed by smart contracts, which allows Topps to program in the trading fees. For every digital GPK card traded, Topps can get X% of the trade (with X defined as part of the smart contract). While some may balk at this arrangement, arguing that they should not have to pay a fee to trade their own property, the power in this smart contract trading fee is that it incentivizes Topps to keep initial prices reasonable and inventories controlled. In addition to the economics this enables, there is the benefit that every transaction involving a GPK card is visible and immutable. That level of trust is hard to achieve with real-world cards, where counterfeits and questions about production levels abound.
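To make the fee mechanism concrete, here is a minimal sketch of the idea in Python. This is not WAX or Topps code; the marketplace class, card IDs, prices, and the 2% fee are all hypothetical, and a real smart contract would enforce this on-chain rather than in a Python object. It only shows how a creator-defined trading fee can be collected automatically on every resale.

```python
# Hypothetical illustration of a creator trading fee on secondary sales.
# Not WAX/Topps code; names and numbers are made up for the example.

class CardMarketplace:
    def __init__(self, creator: str, fee_percent: float):
        self.creator = creator            # e.g., "Topps"
        self.fee_percent = fee_percent    # the "X%" baked into the contract
        self.balances = {}                # account -> accumulated funds
        self.owners = {}                  # card_id -> current owner

    def mint(self, card_id: str, first_buyer: str, price: float):
        """Primary sale: the creator receives the full price."""
        self.owners[card_id] = first_buyer
        self.balances[self.creator] = self.balances.get(self.creator, 0) + price

    def trade(self, card_id: str, buyer: str, price: float):
        """Secondary sale: the creator automatically receives fee_percent of the trade."""
        seller = self.owners[card_id]
        fee = price * self.fee_percent / 100
        self.balances[self.creator] = self.balances.get(self.creator, 0) + fee
        self.balances[seller] = self.balances.get(seller, 0) + (price - fee)
        self.owners[card_id] = buyer


market = CardMarketplace(creator="Topps", fee_percent=2.0)   # X = 2%, hypothetical
market.mint("adam-bomb-01", first_buyer="alice", price=5.00)
market.trade("adam-bomb-01", buyer="bob", price=100.00)      # creator earns $2 on the resale
print(market.balances)  # {'Topps': 7.0, 'alice': 98.0}
```

The design point is that the fee lives in the contract itself rather than in a marketplace’s terms of service, so it applies to every on-chain transfer.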

Beyond Bitcoin proving a decentralized digital currency is viable (no small feat!), Topps’ use of the WAX blockchain is one of the rare examples I’ve seen where blockchain solves real problems and creates new opportunities. Bonus points for the digital GPK cards selling out FAST. Not even the hurdles of clunky blockchain UX can stop collectors from buying up their Adam Bombs and Fake Jakes.

Shared: Why I think GCP is better than AWS

Both AWS and GCP are very secure and you will be okay as long as you are not careless in your design. However, GCP for me has an edge in the sense that everything is encrypted by default. For example their buckets and their logs are encrypted in transit and at rest. For some bizarre reason AWS does not encrypt buckets or logs by default, you have to enable this. Who the hell would NOT want their data encrypted on AWS servers?

Fernando Villalba: Why I think GCP is better than AWS

When I saw this post come across, I thought it was going to be some serious clickbait. Surprise, surprise – it’s anything but, filled with specific examples of where GCP is superior to AWS. The (sad) truth is that GCP has taken far too long to get to its current state. Google seemed to treat GCP as a side project, while Amazon created the market for cloud infrastructure. Regardless, there are many things Amazon could learn from GCP, including simple things like sane defaults, wrangling the mess that is their account system, and putting together a more coherent story for developers. – Josh
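As a concrete example of the defaults point: at the time of this post, new S3 buckets did not have default encryption turned on, so you had to enable it yourself. A minimal sketch of doing that with boto3, assuming AWS credentials are configured and using a made-up bucket name:

```python
# Minimal sketch: turn on default server-side encryption for an existing S3 bucket.
# Assumes boto3 is installed and AWS credentials are configured; bucket name is hypothetical.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_encryption(
    Bucket="my-example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# Verify the setting took effect.
resp = s3.get_bucket_encryption(Bucket="my-example-bucket")
print(resp["ServerSideEncryptionConfiguration"]["Rules"])
```

On GCP, Cloud Storage buckets encrypt data at rest without any of the above, which is exactly the “sane defaults” contrast Fernando is making.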

Shared: Do I Need an API Gateway if I Use a Service Mesh?

I believe the confusion arises because of the following:

  • there is overlap in technologies used (proxies)
  • there is overlap in capabilities (traffic control, routing, metric collection, security/policy enforcement, etc.)
  • a belief that “service mesh” replaces API management
  • a misunderstanding of the capabilities of a service mesh
  • some service meshes have their own gateways

The last bullet is especially confusing to the discussion.

Christian Posta: Do I Need an API Gateway if I Use a Service Mesh?

Good post, especially relevant for my current work where we’re deep diving into Kubernetes/Istio land. Previously, I was working on products that had an API Gateway as a main component. I noticed right away a lot of overlap between what Istio provides and what API gateway products like Apigee, Kong, etc. provide. Confusing. Christian’s post helps identify the overlaps while digging into the differences. – Josh

Shared: Only 15% of the Basecamp operations budget is spent on Ruby

For a company like Basecamp, you’d be mad to make your choice of programming language and web framework on anything but a determination of what’ll make your programmers the most motivated, happy, and productive. Whatever the cost, it’s worth it. It’s worth it on a pure cost/benefit, but, more importantly, it’s worth it in terms of human happiness and potential.

DHH: Only 15% of the Basecamp operations budget is spent on Ruby

True. A similar argument can be made when people debate Kubernetes vs. Serverless vs. PaaS vs. ??? In most cases, whichever approach you go with for running your apps/services, that’s just the tip of the iceberg. Data stores, file storage, messaging, networking, and more all need solutions and will often make up more of the financial (and operational) pie than your service/app layer. – Josh
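To put rough numbers on that, here is a back-of-the-envelope sketch. The 15% share comes from DHH’s post; the dollar amounts and the “2x cheaper language” scenario are entirely hypothetical, just to show how little of total spend the language layer can move:

```python
# Back-of-the-envelope: how much can switching languages really save?
# The 15% share is from DHH's post; all dollar figures are made up for illustration.
monthly_ops_budget = 100_000          # total ops spend ($/month), hypothetical
language_layer_share = 0.15           # the "15% on Ruby" figure

language_layer_cost = monthly_ops_budget * language_layer_share   # $15,000

# Suppose a rewrite in a "more efficient" language halved that layer's cost.
savings = language_layer_cost * 0.5                                # $7,500
print(f"Savings: ${savings:,.0f}/month, i.e. {savings / monthly_ops_budget:.1%} of total ops spend")
# -> Savings: $7,500/month, i.e. 7.5% of total ops spend
```

Against a ceiling like that, DHH’s argument is that programmer motivation, happiness, and productivity matter far more than the slice you might shave off the language layer.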

Shared: Why I Listen to Podcasts at 1x Speed

On my microblog I mentioned that I always listen to podcasts at 1x speed.

Here’s why:

We’re in danger, I think, of treating everything as if it’s some measure of our productivity. Number of steps taken, emails replied-to, articles read, podcasts listened-to.

Brent Simmons: Why I Listen to Podcasts at 1x Speed

Same. It’s strange, though: I listen to most audiobooks at around 1.5x speed, but podcasts are always played at 1x speed. I think this is mainly because I enjoy the regular cadence of podcast conversations, while audiobooks feel painfully slow in comparison, since it’s usually just one narrator reading. If I felt a similar need to speed up podcasts, I don’t think I would listen to them as much as I do. – Josh

Shared: Reorganizing Product Teams

People want to know what worked well at other successful companies because there is an assumption that it might work well for them… this is understandable. But what worked at Spotify or Shopify or Stackify will not necessarily work for them.

And recently, a product leader at Spotify shared that her group (and many others throughout her company) have evolved to very different organizational models than described in Henrik Kniberg’s 2012 Scaling Agile @ Spotify. I’ve repeatedly seen that cutting and pasting someone else’s organization ignores the hard retrospection about what’s not working (and what works well) at your own company.

Rich Mironov’s Product Bytes: Reorganizing Product Teams

So much good advice in this article by Rich Mironov. His section on Professional Services organizations trying to build products is spot on. There is such a mindset shift required in going from pro services to building products that most organizations can’t pull it off. Not to mention the practical issues that arise from focusing on a product that generates zero revenue yet demands 100% focus from a product dev team that would otherwise be billing for their time. There is immense discipline required to pull this off.

Product Owner != Product Manager. Amen. I’ve been practicing Scrum for about 15 years now. I received training from one of the creators of Scrum, Ken Schwaber, and from one of the go-to people for product owner training, Mike Cohn. Great training. And yet, most of the focus was on overcoming problems that internal IT departments face versus building full-blown products that generate revenue. As much help as I think technical PROJECT management needs in various areas, technical PRODUCT management is in desperate need of help across the board. Too many people have the title/role with little training, mentoring, and/or coaching. For all the excellent resources available for Agile and lean project management and software development training, there is little (in comparison) for product management. The gap is felt by churning product development teams, where a common complaint is that “Product” is incapable or even incompetent. The problem is left unaddressed mainly because those in the Product org responsible for fixing it are in just as bad shape as (if not worse than) those they hired. Rough. – Josh

Shared: The Blue Tape List

Having been through this experience many times, I’ve discovered that a simple fix is patience. In time, that which is different will feel normal. It’s why when a team member reports moderate concerns with a new hire that I gently always ask, “When did they start?” If the answer is less than two months, I suggest, “If it’s not heinous behavior, give it another month. They’re still adapting to a new environment, and we don’t know who they are.”

Rands in Repose: The Blue Tape List

Solid advice in this post. Adjusting to a new job or role takes time. In these situations we notice things that appear to be broken. Some things are broken, some are not. Keeping a list during this adjustment period is good. Revisit it once some time has passed (3 months is good guidance) to determine what needs fixing, what needs improving, and what is actually OK to leave as is, at least for now. – Josh

Shared: How to break the “senior engineer” career ceiling

Remember, your job is to help others become better versions of themselves, not to make them become you!
You need to show, not tell. And use your power of influence to level others up. Help other engineers improve their decision making and their ability to execute effectively. This can be slow and challenging and 80% of what you do is communication.

theburningmonk.com: How to break the “senior engineer” career ceiling

Moving from being an individual contributor as a senior software engineer to being a tech lead or principal engineer is not for the faint of heart. I’ve seen people struggle with it over the years of my career in tech. The biggest “gotcha” is when the person realizes they’re no longer measured on their output but on their impact, and that impact often has little to do with what they were best at prior to the promotion. Making myself better is one thing. Making others better is next level and then some. – Josh

Shared: Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS

Infrastructure ignorance comes at a cost. Cryptocurrency developers largely ignore the fact that the cloud exists (certainly that managed services, the most modern incarnation of the cloud, exist). This is surprising, given that the vast majority of cryptocurrency nodes run in the cloud anyway, and it almost certainly accounts for at least some of the large differential in performance. In a system like DynamoDB you can count on the fact that every line of code has been optimized to run well in the cloud. Amazon retail is also a large user of serverless approaches in general, including DynamoDB, AWS Lambda, and other modern cloud services that wring performance and cost savings out of every transaction.

A Cloud Guru – Medium: Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS

Lots of interesting info on DynamoDB in comparison to blockchains. One big difference that I think the author downplays is that DynamoDB is a completely controlled and optimized database from one owner (Amazon), while the most popular blockchains like Bitcoin are open projects running in a completely open environment. If DynamoDB were running across AWS, Azure, and GCP, as well as less “corporate” data centers, with Microsoft, Google, and miscellaneous individuals running DynamoDB nodes, would DynamoDB performance take a hit? How big of one? And how comfortable would Amazon be with trusting that type of DynamoDB deployment?

Also, DynamoDB is not even close to being censorship resistant. This may seem like a minor point, but, as the recent events in Hong Kong remind us, censorship resistance can be very important. – Josh