Shared: Why I Listen to Podcasts at 1x Speed

On my microblog I mentioned that I always listen to podcasts at 1x speed.

Here’s why:

We’re in danger, I think, of treating everything as if it’s some measure of our productivity. Number of steps taken, emails replied-to, articles read, podcasts listened-to.

Brent Simmons: Why I Listen to Podcasts at 1x Speed

Same. It’s strange though, I listen to most audio books at around 1.5x speed. Podcasts are always played at 1x speed. I think this is mainly because I enjoy the regular cadence of the podcast conversations, but the audio books feel painfully slow in comparison, mainly because it’s usually just one narrator reading. If I felt a similar need to speed up podcasts, I don’t think I would listen to them as much as I do. – Josh

Shared: Reorganizing Product Teams

People want to know what worked well at other successful companies because there is an assumption that it might work well for them… this is understandable. But what worked at Spotify or Shopify or Stackify will not necessarily work for them.

And recently, a product leader at Spotify shared that her group (and many others throughout her company) have evolved to very different organizational models than described in Henrik Kniberg’s 2012 Scaling Agile @ Spotify. I’ve repeatedly seen that cutting and pasting someone else’s organization ignores the hard retrospection about what’s not working (and what works well) at your own company.

– Rich Mironov’s Product Bytes: Reorganizing Product Teams

So much good advice in this article by Rich Mironov. His section on Professional Service organizations trying to build products is spot on. There is such a mindset shift necessary in going from pro services to building products that most organizations can’t pull it off. Not to mention the practical issues that arise from focusing on a product that generates zero revenue and demands 100% focus from a product dev team that would be billing for their time if they weren’t working on the product. There is immense discipline required to pull this off.

Product Owner != Product Manager. Amen. I’ve been practicing Scrum for about 15 years now. I received training from one of the creators of Scrum, Ken Schwaber, and one of the go-to people for product owner training, Mike Cohn. Great training. And yet, most of the focus was on overcoming problems that internal IT departments face versus building full-blown products that generate revenue. As much help as I think technical PROJECT management needs in various areas, technical PRODUCT management is in desperate need of help across the board. Too many people have the title/role with little training, mentoring, and/or coaching. For all the excellent resources available for Agile and lean project management and software development training, there is little (in comparison) for product management. The gap is felt by churning product development teams, with a common complaint being that “Product” is incapable or even incompetent. The problem is left unaddressed mainly because those in the Product org responsible for fixing it are in just as bad (if not worse) shape than those they hired. Rough. – Josh

Shared: The Blue Tape List

Having been through this experience many times, I’ve discovered that a simple fix is patience. In time, that which is different will feel normal. It’s why when a team member reports moderate concerns with a new hire that I gently always ask, “When did they start?” If the answer is less than two months, I suggest, “If it’s not heinous behavior, give it another month. They’re still adapting to a new environment, and we don’t know who they are.”

Rands in Repose: The Blue Tape List

Solid advice in this post. Adjusting to a new job or role takes time. We notice things that appear to be broken in these situations. Some things are broken, some are not. Keeping a list is good during this adjustment period. Revisit it once some time (3 months is good guidance) has passed to determine what needs to be fixed, what needs to be improved, and what is actually OK to leave as it is, at least for now. – Josh

Shared: How to break the “senior engineer” career ceiling

Remember, your job is to help others become better versions of themselves, not to make them become you!
You need to show, not tell. And use your power of influence to level others up. Help other engineers improve their decision making and their ability to execute effectively. This can be slow and challenging and 80% of what you do is communication.

theburningmonk.com: How to break the “senior engineer” career ceiling

Moving from an individual contributor as a senior software engineer to a tech lead or principal engineer is not for the faint of heart. I’ve seen people struggle with it over the years in my career in tech. The biggest “gotcha” is when the person realizes they’re no longer measured based on their output but on their impact, and that impact often has little to do with what they were best at prior to the promotion. Making myself better is one thing. Making others better is next level and then some. – Josh

Shared: Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS

Infrastructure ignorance comes at a cost. Cryptocurrency developers largely ignore the fact that the cloud exists (certainly that managed services, the most modern incarnation of the cloud, exist). This is surprising, given that the vast majority of cryptocurrency nodes run in the cloud anyway, and it almost certainly accounts for at least some of the large differential in performance. In a system like DynamoDB you can count on the fact that every line of code has been optimized to run well in the cloud. Amazon retail is also a large user of serverless approaches in general, including DynamoDB, AWS Lambda, and other modern cloud services that wring performance and cost savings out of every transaction.

A Cloud Guru – Medium: Crypto can’t scale because of consensus … yet Amazon DynamoDB does over 45 Million TPS

Lots of interesting info on DynamoDB in comparison to blockchains. One big difference that I think the author downplays is that DynamoDB is a completely controlled and optimized database from one owner (Amazon), while the most popular blockchains like Bitcoin are open projects running in a completely open environment. If DynamoDB were running across AWS, Azure, GCP, as well as less “corporate” data centers, and had Microsoft, Google, and miscellaneous individuals running DynamoDB nodes, would DynamoDB performance take a hit? How big of one? And how comfortable would Amazon be with trusting that type of DynamoDB deployment?

Also, DynamoDB is not even close to being censorship resistant. This may seem like a minor point, but, as the recent events in Hong Kong remind us, censorship resistance can be very important. – Josh

Shared: This Feature Should Be Easy

It’s everything around the feature that makes it harder: UI design, localization, refactoring, accessibility, state restoration, getting new artwork (for a toolbar button, for instance), dealing with errors, testing, updating the documentation, etc.

inessential.com: This Feature Should Be Easy

If you want to make sure your feature request is dismissed or put at the bottom of the list, make comments about how easy it should be to implement it. Maybe it is easy. But, as Brent notes, it’s likely not so easy and saying it is can be perceived as condescending. – Josh

Wardley Maps for product development

Wardley Mapping is most commonly associated with higher level business strategy. When I’ve talked to people familiar with Wardley Mapping, they’ve been quick to dismiss it as a tool for the C-level suite, not for mere mortals developing products. I disagree. The following will attempt to explain why I find Wardley Maps not only relevant but vital to product development. If you’re unfamiliar with Wardley Maps, I recommend watching Simon Wardley’s 2017 Google Next talk for a good overview. If you want to take a deeper dive, I suggest Simon’s free online book.

Digital Beanie Baby: Husky
Welcome to the future. Digital Beanie Babies. Trade them on your favorite exchange!

Imagine. We’re transported to a time where it seems like anything with the label “beanie” is drawing attention, mostly because a new form of Digital Beanie Babies (DBB) has been skyrocketing in value for over 12 months. An entire ecosystem has sprouted up around the digital critters. In the midst of this beanie bonanza are exchanges launching to enable trading DBB, much like other exchanges exist for trading stocks, fiat currency (e.g. USD), etc. Each new DBB exchange has its strengths and weaknesses.

A few entrepreneurs identify an opportunity to create a new exchange that fills the gaps left by even the most successful DBB exchanges, while also adding a couple innovative features. They’re focused on meeting existing customer needs in a burgeoning market, not building a larger technical “platform” that has aspirations beyond being a very successful DBB exchange. The founders form their startup, get some technical people on board, build a proof of concept, raise funding, and continue building on the vision of providing a better DBB exchange for traders big and small.

But how does the startup know that they should build this product from the ground up?

Product development for this new exchange is focused on continuing the path the proof of concept set – build a secure, fast, trustworthy, and feature rich DBB exchange from the ground up. In such a new market space this seems reasonable. Put together a small, agile/lean development team and continue to build a product that meets the needs of its traders, validating the product as it is built. Lean startup style.

But how does the startup know that they should build this product from the ground up? Are there pieces they should buy? Higher-level functional open source components they should adopt? The assumption is made that because DBB trading is a fairly new market it requires building a brand new product from scratch. Besides, many of the most successful DBB startups built their exchanges from the ground up. They blazed a trail; this new startup is simply following the path cleared for them.

Enter Wardley Maps, a way to visualize the evolution of the underlying landscape anchored to the needs of users. When we map out the Digital Beanie Baby trading landscape, including high level components/requirements of a DBB exchange, we start to see that a number of core components of the exchange are available for purchase, leasing, and/or open source. After all, exchanges have been around quite a while in various forms.

Based on the product development approach the Beanie exchange startup is taking, we’d expect to see most of the exchange components and requirements in the “Custom-Built” or “Genesis/Novel” lane, but instead a majority of the core exchange features in development (highlighted in blue) fall into the “Product/Rental” lane or very close to it. What does this tell our startup? It’s telling them that the bulk of what they’re currently developing is something that has (probably) been built before and is available to buy, lease, or simply use (open source).
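To make the exercise concrete, the kind of tally the map makes visible can be sketched in a few lines. The component names and their evolution stages below are purely illustrative assumptions, not taken from a real map:

```python
# Toy tally of exchange components by Wardley evolution stage.
# All names and stage assignments are illustrative assumptions.
from collections import Counter

components = {
    "matching engine": "product",
    "order book": "product",
    "trading API": "product",
    "web app": "product",
    "DBB custody rules": "custom",
    "novel trading feature": "genesis",
}

by_stage = Counter(components.values())
print(f"{by_stage['product']} of {len(components)} components "
      "sit in the product/rental lane")  # → 4 of 6
```

When most of the tally lands in the product/rental lane, that's the signal to question a build-everything-from-scratch plan.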

A common objection I hear at this point is: OK, but what if you’re wrong about the position of various items on the map? That’s quite possible. The potentially incorrect map positioning can drive meaningful discussions as well as further research to either validate or invalidate the position. The beauty of the map is that we’re: A) having the product/business strategy discussion based upon a shared visual aid and context, and B) able to identify our assumptions. In the case of our startup example, the biggest areas likely up for debate are core components (that they’re creating themselves) like the matching engine, order book, API, web app, etc. Maybe with some further digging the discovery is made that commercial offerings exist, but they’re immature, or expensive, or rigid, or have any number of other issues that may make them less viable as options. Maybe we find that certain components are available as products, but are only really suitable for more traditional exchanges, not for those aimed at trading digital beanies. It’s also possible that we discover there are numerous existing components and products that fit our needs and validate the position of those items on the map.

The Wardley Map provides a higher level view that can help drive lower level product development decisions.

Another objection I’ve heard is that what I’m crediting to Wardley Maps is really just another path to market research that any business should already be doing. Possibly, but the map provides a shared context that market research does not commonly provide, as market research tends to end up buried in docs and slide decks. A group of people can look at a Wardley Map together and start to quickly identify areas of alignment, disagreement, opportunity, etc. Those with more industry expertise can be brought in to look at the assumptions on the map and determine if those assumptions are on track or need to be adjusted.

In our startup’s case, the map identified some potential blind spots in the product development approach. Should the exchange startup be building something from the ground up that is already available for purchase, rent, and/or adoption? Unless the devil is hiding in the details of those components mapped in the product lane, it’s unlikely the startup should pour further (fairly limited) development resources into those items.

It makes more sense for the startup to consider focusing its development time and talent on building items in the novel or custom lanes. Or, possibly, focus the engineering team on implementing the items in the product lane, become experts at operating those in production (much sooner than the homegrown equivalent would be available), and then discover through real customer feedback what should be introduced on the exchange next. The possibilities the map opens up are much greater than what development teams tend to consider when they’re building a product, such as: should we scale back feature X, Y, Z? Reprioritize the backlog? “Pivot” to target only one subset of customers initially? Change our approach to development in order to increase velocity and lower time to market? Reset some or even all of our goals for the MVP? Those are all legitimate options to consider in product development, but they may be too nearsighted and ultimately limiting. A startup is particularly sensitive to this form of myopia: the business and product the startup is building are so tightly coupled, it can be hard to determine where one ends and the other begins. The Wardley Map provides a higher level view that can help drive lower level product development decisions.

Adding Wardley Maps to a product development team’s toolbox is a valuable asset, even if at a first glance it’s hard to understand how. Mapping is not a silver bullet or even a tool that is immediately easy to grasp, but it’s worth learning. This example only begins to touch on all the nuances of Wardley Maps. There are many other nuggets of strategy gold to be found for those who take the time to learn how to create and collaborate with these maps.

Not about school

Consider this a parable of sorts for those building products. I’ll leave it up to the reader to determine its meaning.

Carl is a senior in high school and has a D in Algebra II heading into the last semester. He wants to bring his grade up to a B so that he can qualify for college scholarships. Quizzes and tests make up 80% of a student’s grade. The other 20% consists of completing homework (15%) and classroom work (5%). Carl’s Algebra II teacher, Mr. Smith, insists that Carl catch up on all his missing classroom work first, even though most of that work will not help on upcoming quizzes and tests. Mr. Smith works with his student to create a rather large yet well-defined backlog of all the classroom work Carl needs to catch up on so they can track his progress together. Carl diligently works on burning down that backlog. He finally catches up on the classroom work near the end of the school year. Mr. Smith pats him on the back. Carl’s final Algebra II grade is a D+.
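The weights in the parable make the outcome predictable. Here's a quick back-of-the-envelope check; only the 80/15/5 weights come from the story, while the starting scores are my own illustrative assumptions:

```python
# Weighted-grade arithmetic from the parable: quizzes/tests 80%,
# homework 15%, classroom work 5%. Starting scores are assumptions.
weights = {"quizzes_tests": 0.80, "homework": 0.15, "classwork": 0.05}

def overall(scores):
    return sum(weights[k] * scores[k] for k in weights)

before = {"quizzes_tests": 0.65, "homework": 0.60, "classwork": 0.20}
after = dict(before, classwork=1.00)  # Carl completes all classroom work

print(round(overall(before), 2))  # → 0.62
print(round(overall(after), 2))   # → 0.66: four points, still a D
```

Even a perfect score in the 5% bucket moves the final grade by at most five points; only improvement in the 80% bucket could get Carl to a B.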

A remote retrospective

It’s been a while since I last facilitated an all remote retrospective. Below is an email I sent out to the team I’m currently working with to help us prepare for our first “retro” together. We’ve since held the retro and it went well overall, so I thought this prep and guidance might be useful for others to iterate on.

—–

Hi All,

On Tuesday we’re going to have a Project Retrospective (aka “retro”). In my experience, retros are hugely beneficial opportunities for teams to learn and grow.

Please take the time to read the rest of this email. It outlines how the retro will run. In order to make the most of our time, it’s best to come prepared. 🙂

Retro Goals

  • Learning jointly – From each other’s different perspectives, feelings, and current thoughts about where the team is at since launching the project.
  • Taking action – Based on what we learn together, we identify where we can most benefit from improvement and take action.
  • Strengthening the team – We’re in this together. By listening, learning, and taking action on what we’ve learned, we develop a stronger bond that is bigger than “just the work”.

Ground Rules

Regardless of what we discover, we understand and truly believe that everyone did the best job he or she could, given what was known at the time, his or her skills and abilities, the resources available, and the situation at hand.

  1. Be respectful
  2. Be present (no phones, no “side chats/conversations”, no browsing, etc.)
  3. Everyone gets a turn to speak
  4. No interrupting
  5. No judgement on feedback (use “I statements”)

Before The Retro (between now and when we meet)

Please take time between now and when we meet to think about the following in relation to your experience and/or what you observed during your time with the TGE project:

  • What you want us to continue doing
  • What you’d like us to start doing
  • What you’d like us to stop doing

You can add your items to this doc, which we’ll use as part of the retro: Google Doc

The Start

Check-in: We’ll go around the call and have everyone provide one word for how they’re feeling in the moment. Have trouble coming up with a word for how you’re feeling? I know I do sometimes! Try this to help identify a word: https://verbaliststravel.files.wordpress.com/2016/01/the-language-and-vocabulary-wheel-for-feelings-verbalists.jpg

The Middle

Part 1) We’ll use the “Before the Retro” section above to guide the initial discussion. What we want to: continue, start, and stop doing. If you haven’t added your items to each area, we’ll have a short time for everyone to add to each list.

Part 2) We’ll go through each item and open it up for questions/clarification. I will do my best to encourage open, focused discussion that honors our time.

Part 3) Everyone gets 3 votes per area to put against the items they think we should take action on now, not later. You can apply more than 1 vote to an item if you feel that strongly about it.

Part 4) Identify the top item in each area and determine next steps (which can be as simple as identifying an owner to drive the action post-retro). …But what about the other items – aren’t they important too? Yes! But focus is key. Just because an item isn’t made “top priority” doesn’t mean it won’t see progress in the future.
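The voting in Parts 3 and 4 is simple enough to sketch in a few lines; the item names and vote counts below are made-up examples for illustration:

```python
# Dot-vote tally for the retro: each person spends up to 3 votes per
# area; the top item per area becomes the action item. Item names and
# vote counts are made-up examples.
from collections import Counter

votes = {
    "continue": ["daily demos", "pairing", "daily demos", "daily demos"],
    "start": ["design reviews", "shared on-call", "design reviews"],
    "stop": ["meetings running over", "late tickets", "meetings running over"],
}

actions = {area: Counter(cast).most_common(1)[0] for area, cast in votes.items()}
for area, (item, count) in actions.items():
    print(f"{area}: act on '{item}' ({count} votes)")
```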

I’m accountable for ensuring the 3 items we identified to take action on make progress. Don’t expect miracles. Some things require a fair amount of determination over a period of time to show results. My commitment is to continue to provide visibility to the items and help move them forward. Expect this to be part of our weekly meetings, even if it’s just a quick update.

The End

Check-out: Everyone on the call gets a brief (30 seconds or less) opportunity to express closing thoughts/feelings now that the retro is at a close.

Looking forward to learning and improving with you all!

Josh

Deploying a Python Flask web app on AWS Lambda

The best server is no server?

OK, servers are still involved with “serverless computing”, but not ones that you and I need to worry about maintaining and scaling. While serverless platforms like Amazon’s AWS Lambda, Google’s Cloud Functions, IBM’s OpenWhisk, Microsoft’s Azure Functions, and others aren’t the right fit for every need, I’m beginning to think they’re applicable to more apps and services than not.

I’ve been playing around with AWS Lambda for a little while now. After getting a number of functions up and running, I started to think about how this platform might be applicable for web apps. While I’ve built a Slack app using Node.js, I have to admit I missed the elegance of Python.

I settled on getting a little web app based on a Python web framework up and running on Lambda. I wanted the deployment to be as painless as possible (i.e. easily repeatable once configured). For this exercise I didn’t care about integrating with outside services. I was fine with running against a local SQLite database, knowing I could switch over to RDS or another database at another time. Here is what I ended up selecting for this task:

  • Flask – the Python web framework
  • The Serverless framework (with the serverless-wsgi plugin) – for packaging and deployment
  • The Flaskr example app – the app to deploy

I chose Flask because it’s well regarded and simple to get running. I chose the Serverless framework (versus something like Zappa) because Serverless provides the ability to deploy Python and non-Python based projects quickly while also allowing you to deploy to other cloud providers like Google, Microsoft, etc. I chose the Flaskr app because it’s simple but not too simple.

Before we get started

There’s quite a bit that needs to be set up and running before my howto below will work for you. Here is what I expect to be running:

  • Node.js and npm (used to install the Serverless framework and its WSGI plugin)
  • Python 2.7 and pip (the serverless.yml below targets the python2.7 runtime)
  • git (used to pull down the Flaskr example)
  • An AWS account with credentials configured locally for deployments

Also, my instructions assume you’re on Linux or Mac. If you’re on Windows, you’ll need to adjust the commands for your environment.

How to get it all running

  1. Setup Serverless and the Flaskr app locally
  2. Modify SQLite code to run in Lambda (and locally)
  3. Configure the Serverless deployment
  4. Deploy to AWS
  5. Remove from AWS
1. Setup Serverless and the Flaskr app locally

In a terminal session run the following:

# Install Serverless globally
npm install -g serverless

# Create a flaskr directory in our home directory and clone the flaskr project
mkdir ~/flaskr
cd ~/flaskr
git init
git remote add -f origin https://github.com/pallets/flask.git
git config core.sparseCheckout true
echo "examples/flaskr/" >> .git/info/sparse-checkout
git pull origin 0.12-maintenance

# Move the flaskr project out of the examples dir & get rid of the examples dir
mv examples/flaskr .
rm -r examples
cd flaskr

# Install the Serverless WSGI plugin locally
npm install --save serverless-wsgi

# Create the Serverless AWS Python template
serverless create --template aws-python

# Add Flask 0.12.2 as a requirement
echo "Flask==0.12.2" >> requirements.txt

# Make flaskr runnable
export FLASK_APP=flaskr
sudo -H pip install --editable .

# Initialize the database
flask initdb

These commands install the Serverless framework, download the Flaskr app from GitHub, set up the app to run with Flask, and initialize a local SQLite database with the necessary table. If all goes well, you should be able to start up Flaskr locally:

flask run

In this case I can access http://127.0.0.1:5000/ in my browser and see something like this:

2. Modify SQLite code to run in Lambda (and locally)

We need to modify the flaskr/flaskr.py file in order to get SQLite to work in Lambda. In case it wasn’t already obvious, this setup is not meant for production. With that out of the way…

Replace the app.config.update section with the following to configure the SQLite database file to be created in /tmp:

flaskr/flaskr.py

app.config.update(dict(
    DATABASE='/tmp/flaskr.db',
    DEBUG=True,
    SECRET_KEY='development key',
    USERNAME='admin',
    PASSWORD='default'
))
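The /tmp path matters because, in Lambda, the deployment package is read-only at runtime and /tmp is the only writable scratch space. Here's a standalone sketch of the same pattern with plain sqlite3; the table layout is my assumption based on the queries flaskr uses, not copied from the app's schema.sql:

```python
# Create and query a SQLite file under /tmp, the only writable path in
# a Lambda container. The schema is an assumption mirroring the
# queries flaskr runs.
import os
import sqlite3

db_path = '/tmp/flaskr.db'
conn = sqlite3.connect(db_path)
conn.execute('create table if not exists entries '
             '(id integer primary key autoincrement, title text, text text)')
conn.execute("insert into entries (title, text) values ('hello', 'world')")
conn.commit()

rows = conn.execute('select title, text from entries order by id desc').fetchall()
print(rows[0])  # → ('hello', 'world')

conn.close()
os.remove(db_path)  # keep the scratch space clean between runs
```

Keep in mind /tmp is per-container and ephemeral, which is exactly why the route change below re-initializes the database on demand.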

Then look further down in the file and modify the @app.route('/') section with the following:

@app.route('/')
def show_entries():
    db = get_db()
    entries = {}
    try:
        cur = db.execute('select title, text from entries order by id desc')
        entries = cur.fetchall()
    except:
        init_db()
        entries = {}
    finally:
        return render_template('show_entries.html', entries=entries)

The above change tries to query the database and if there’s an exception, will attempt to call our init_db() function to create the SQLite database and set an empty entries variable.
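As an aside, the bare except above also swallows unrelated failures. A slightly tighter variant of the same init-on-first-use idea (my own sketch, not from the original post) catches only SQLite's operational error, so genuine bugs still surface:

```python
# Init-on-first-use, catching only the "no such table" class of error.
# Demoed against an in-memory database instead of /tmp.
import sqlite3

def fetch_entries(db, init_db):
    """Query entries; build the schema on the first miss."""
    try:
        cur = db.execute('select title, text from entries order by id desc')
        return cur.fetchall()
    except sqlite3.OperationalError:  # e.g. "no such table: entries"
        init_db()
        return []

db = sqlite3.connect(':memory:')
make_schema = lambda: db.execute(
    'create table entries (id integer primary key autoincrement, '
    'title text, text text)')

print(fetch_entries(db, make_schema))  # → [] (schema created on the fly)
db.execute("insert into entries (title, text) values ('hello', 'world')")
print(fetch_entries(db, make_schema))  # → [('hello', 'world')]
```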

3. Configure the Serverless deployment

Now we need to setup our Serverless deployment by setting the serverless.yml file to:

serverless.yml

service: test

provider:
  name: aws
  runtime: python2.7

package:
  exclude:
    - node_modules/**

plugins:
  - serverless-wsgi

functions:
  flaskr:
    handler: wsgi.handler
    events:
      - http: ANY {proxy+}
      - http: ANY /

custom:
  wsgi:
    app: flaskr.app

These settings tell AWS that we have a Python-based function to run, that we’ll use the serverless-wsgi plugin, and that all HTTP requests are to be handled by the WSGI handler running our flaskr.app. Note we need to explicitly set an event to respond for “/” even though we have the catch-all entry of {proxy+} set. If we don’t, AWS API Gateway will return an error of Missing Authentication Token.

Let’s test this locally one last time before we deploy to AWS:

sls wsgi serve

You should get a message telling you where the Flask app is running (e.g. http://localhost:5000), complete with a locally running instance of the flaskr app similar to the one above.

4. Deploy to AWS

Now we’re ready to deploy to AWS:

sls deploy -v

This can take a little bit to run, as this single command takes care of all the AWS setup for us to access our flaskr app via Lambda. When successful, you should see output towards the bottom similar to the following:

Stack Outputs
FlaskrLambdaFunctionQualifiedArn: arn:aws:lambda:us-east-1:770800358818:function:test-dev-flaskr:31
ServiceEndpoint: https://zwe667htwd.execute-api.us-east-1.amazonaws.com/dev
ServerlessDeploymentBucketName: test-dev-serverlessdeploymentbucket-s03hsyt0f0oz

Copy the URL from the ServiceEndpoint and access it in your web browser. You should see something familiar:

If you don’t get a response or see an error of some sort, it’s time to look at the logs on AWS:

sls logs -f flaskr -t

If you want to make changes to the app (e.g. change templates, modify routes in flaskr/flaskr.py, etc.) and don’t need to make changes to your serverless.yml file, you can do a much quicker re-deploy of the code (i.e. no AWS configuration):

sls deploy function --function flaskr
5. Remove from AWS

Once you’re done with these steps, it’s a good idea to remove your function so that you don’t get unexpectedly charged for anything related to this deployment. Don’t fret, it’s only one line to get it all running again. 🙂

sls remove

Where to from here?

I may get this example working with a proper database. If I do, I’ll be sure to add a new post explaining how I did that.