Hi everyone. I’ll be at LTSF 2019 (Learning Technologies Summer Forum) in London on 11th July 2019.
If you’re interested in the slides, drop me a line on LinkedIn.
Hi everyone. I’ll be at Unbound London for a talk at 12.05 on Thu 18th July 2019.
[together with Carolina]
Running a premortem has been the single most helpful exercise I’ve found for dealing with complex, risky projects. That’s the core idea, but there’s a little more to it.
I’m not joking when I say that a premortem refocused the hardest death-march I’ve been on, and that another was a key step in planning a complicated (and ultimately successful) $12m fund-raise. If all goes well, it’ll help you surface your biggest fears and proactively figure out steps to defuse them.
In short, this is how I run them:
I was sitting in a sun-warmed pizza restaurant in London last week talking about deep learning libraries. Everyone had their favourites. I was betting on TensorFlow, the new kid in town released by Google in late 2015. In response, a Torch fan pointed out that Google may invest in building up TensorFlow internally, but there’s no reason for them to invest in the shared, external version.
This got me thinking – why has Google open sourced TensorFlow?
Naively, I usually assume that companies keep their crown jewels proprietary while open sourcing the periphery. In other words, keep your secret sauce close to your chest, but share the stuff that’s more generic: it builds brand and goodwill, others may contribute helpfully, and you’re not straightforwardly giving a leg-up to your direct competitors.
Google’s approach to open source has been a little more strategic than this. Look at a handful of their major open source projects – Android, Chromium, Angular, Go, Dart, V8, Wave, WebM. The motivations behind them are various:
For TensorFlow to make sense and be worthy of long-term support from Google, it needs to fall in the former category.
It is indeed a new version of an existing technology – it’s free, it’s better engineered, though not yet faster.
So, is it intended to either disrupt an incumbent, or to increase usage and thus drive revenue for core Google businesses? I can only think of two possibilities:
I want something like the above to be true, because I want there to be a strategic reason for Google to invest in TensorFlow, and I want it to get easier and easier to develop interesting and complex deep learning apps.
What if I suggested that you finish each day with nothing left on your todo list? This is the only rule of Todo Zero.
You might find yourself biting back some choice words. This sounds like unhelpful advice from someone with a much simpler life than yours.
Not so fast. Picture a world-class juggler with half-a-dozen balls in motion. How many balls do they have in their hands at once? None, one, or two. Never more than two. The remainder are in the air.
By analogy, work on just one or two things at a time. The remainder can be scheduled for some time in the future. In this way, it’s very possible to finish what’s currently on your list.
Otherwise, all of the competing priorities of a long list clamour for your attention. They clutter one another, making it impossible to focus. When you’re pulled in many directions, you’ll end up immobilized and demotivated.
At least that’s what has happened to me. My implicit solution was to procrastinate until panic seized me, and then enjoy its temporary clarity of focus.
So, here’s a recipe for Todo Zero that will take an hour or two to start with:
So, that leaves you with quick tasks that take less than 10 minutes, along with the one or two most urgent/important jobs for today.
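If it helps to see the triage rule as something mechanical, here’s a toy sketch in Python (entirely illustrative: the task names are made up, but the rules are the ones above, namely do sub-10-minute tasks now, keep at most two big tasks for today, and schedule the rest):

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    minutes: int       # rough effort estimate
    important: bool    # is it one of today's top priorities?

def triage(tasks):
    """Split a todo list Todo Zero-style: do quick tasks now, keep at
    most two big tasks for today, schedule everything else."""
    do_now = [t for t in tasks if t.minutes < 10]
    big = [t for t in tasks if t.minutes >= 10]
    today = [t for t in big if t.important][:2]  # never more than two balls in hand
    later = [t for t in big if t not in today]
    return do_now, today, later

todos = [
    Task("reply to Dana", 5, False),
    Task("draft Q3 plan", 120, True),
    Task("fix login bug", 90, True),
    Task("expense report", 30, False),
]
now, today, later = triage(todos)
print([t.name for t in now])    # ['reply to Dana']
print([t.name for t in today])  # ['draft Q3 plan', 'fix login bug']
print([t.name for t in later])  # ['expense report']
```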
Marvel at your wonderfully shortened todo list. Look away, take a deep breath. Do not look at your email. Make a coffee. Feel a little calmer than you did, and enjoy it.
Now, let’s do the same for your email.
Stand up, and take a deep breath. Walk around for a few minutes, and make a cup of coffee. This is going really well.
Take a break.
At this point, you’re close to a clean slate, with just your important tasks remaining. You probably have some meetings and stuff. Have lunch. Refresh.
With any luck, you made progress on those one or two most important tasks.
Armed with this approach, you can triage your own life. You can choose to focus on the most urgent or important things first, and ignore the rest. They’ll shamble back when their time has come, and then you can dispatch them in turn.
P.S. There are a few tools that will help:
P.P.S. One final note. I can’t juggle two balls, let alone six. So take that into account, seasoned with a pinch of salt, in reading this.
P.P.P.S. Of course, there is nothing that’s original here. It’s a death-metal-mashup of Inbox Zero and GTD. It’s not always feasible to work like this. If you don’t procrastinate, you probably don’t need it. Etc.
The best way to kill and bury your crusty old PHP system – replatforming a legacy system without your users noticing.
Hearing those words makes me feel like I’m tied mutely to a railway track, unable to scream for help as a train thunders towards me. We humans are walking sacks of blood, bile and bias, and estimating how long things will take brings out the worst in us.
A product manager recently asked me whether one can get better at knowing whether things are easy or hard, and how long they will take. The good news is that, with practice, you can help people estimate much better than they would on their own.
Understand the problem you’re trying to solve.
If you don’t understand the problem well enough, you’re certainly blind to its potential complexities. Product managers are often in a *better* position than anyone else to understand the problem!
Understand what’s involved in the proposed solution(s).
This can be the trickiest part for non-engineers, because the details of the solution may sometimes be pretty arcane. Here’s what you can do:
Be alert to the pitfalls and cognitive biases that lead to poor estimates.
Human beings tend to be lazy about thinking through all the pieces of a complete solution: we focus on the major or interesting parts, and ignore the details, or the final 20% of polish that takes all the time. We also tend to focus on the best case, where everything goes right, and ignore all the things that might go wrong. You never know exactly what will go wrong, but if you have a sense of the possible pitfalls, you can factor them into your estimate. Possible approaches:
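One such approach (my illustration, not a prescription) is to force a three-point estimate: name a best case, a likely case, and a worst case, then combine them PERT-style, so the pitfalls you brainstormed actually move the number:

```python
def pert_estimate(optimistic, likely, pessimistic):
    """Three-point (PERT) estimate: forces you to name a worst case
    instead of anchoring on the best case."""
    expected = (optimistic + 4 * likely + pessimistic) / 6
    std_dev = (pessimistic - optimistic) / 6
    return expected, std_dev

# Hypothetical task: best case 3 days, likely 5, but 15 if the
# legacy session handling bites us.
expected, sd = pert_estimate(3, 5, 15)
print(f"expect ~{expected:.1f} days, give or take {sd:.1f}")
# expect ~6.3 days, give or take 2.0
```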
Learn from feedback.
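For instance, in the spirit of evidence-based scheduling, you can track your estimate-versus-actual history and use it to calibrate future guesses. A minimal sketch, with entirely made-up numbers:

```python
# Hypothetical history of (estimated_days, actual_days) pairs.
history = [(2, 3), (5, 9), (1, 1.5), (8, 12), (3, 5)]

# Your personal slip ratio: how much longer things actually take.
ratios = [actual / estimated for estimated, actual in history]
multiplier = sum(ratios) / len(ratios)

raw_estimate = 4  # days, for the next piece of work
print(f"multiplier: {multiplier:.2f}")                          # 1.59
print(f"calibrated estimate: {raw_estimate * multiplier:.1f}")  # 6.4 days
```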
Abe Gong asked for good examples of ‘data sidekicks’.
I still haven’t got the hang of distilling complex thoughts into 140 characters, and so I was worried my reply might have been compressed into cryptic nonsense.
Here’s what I was trying to say:
Let’s say you’re trying to do a difficult classification on a dataset that has had a lot of preprocessing/transformation, like fMRI brain data. There are a million reasons why things could be going wrong.
Things could be failing for meaningful reasons, e.g.:
But the most likely explanation is that you screwed up your preprocessing (mis-imported the data, mis-aligned the labels, mixed up the X-Y-Z dimensions etc).
If you can’t classify someone staring at a blank screen vs a screen with something on it, it’s probably something like this, since visual input is pretty much the strongest and most widespread signal in the brain: your whole posterior cortex lights up in response to high-salience images (like faces and places).
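That’s the ‘data sidekick’ in practice: before attempting the hard classification, check that your pipeline can decode the easy contrast at all. A minimal sketch of that sanity check, using scikit-learn on synthetic stand-in data (real features would come out of your preprocessing pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Stand-in for preprocessed fMRI data: one row of voxel features per
# trial. In reality these come from your (possibly buggy!) pipeline.
rng = np.random.default_rng(0)
n_trials, n_voxels = 200, 500
X = rng.normal(size=(n_trials, n_voxels))
y_blank_vs_stim = rng.integers(0, 2, n_trials)  # easy-contrast labels

# Sidekick task: blank screen vs any visual stimulus. This should be
# trivially decodable; chance-level accuracy here points to a
# preprocessing bug (misaligned labels, swapped axes, ...) rather
# than anything meaningful about the brain.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, X, y_blank_vs_stim, cv=5)
print(f"sanity-check accuracy: {scores.mean():.2f}")
# On real data, expect well above 0.5 before attempting the hard task.
```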
In the time I spent writing this, Abe had already figured out what I meant 🙂