- Learn the lyrics to your favourite song, so you can sing along to it as you walk down the street.
- Oil a door handle that’s always been just a tiny bit awkward, with a can of WD-40 that has a long nozzle.
- Go to a pottery exhibition and buy a mug that feels really good in your hands. Enjoy drinking out of it every day.
- Pick one of your favourite old books and put it in the bathroom to dip in and out of when you feel like it.
- Put just one picture up on the wall. If you don’t have picture hooks, buy them now and then you’ll be ready next time.
- Notice another person in your organisation who you suspect wants to start a fire, and arrange to have lunch with them.
- Spray your favourite leather shoes to make them waterproof, and then smile smugly and gratefully next time you accidentally step in a puddle with them on.
- Create a playlist of songs that cheer you up, to have ready just in case.
- Carry a small Ziploc bag with a couple of tablets of medicines you find useful (e.g. Alka Seltzer for excess, Tums for heartburn, Zirtek & Beconase for hayfever, Ibuprofen for headaches, Strepsils for sore throats). Replenish when you use one, and add to it when you wish you had something.
- Start a list of ‘Small wins’ of your own, and add the first item to it.
In the past, running a premortem has been the single most helpful exercise I’ve found for dealing with complex, risky projects. That’s the core idea, but there’s a little more to it.
I’m not joking when I say that a premortem refocused the hardest death-march I’ve been on, and that another premortem was a key step in planning for a complicated (and ultimately successful) $12m fund-raise. If all goes well, it’ll surface your biggest fears, and help you proactively figure out steps to defuse them.
In short, this is how I run them:
- Gather a couple of other core people/project leads with complementary expertise. Allocate 2 uninterrupted hours (though you might not need all that time once you start to get efficient at them).
- Sit down and read the above Guardian article through together at the beginning of the session.
- Have fun telling each other the nightmare story. I usually set them at the next major milestone, perhaps within the next 6 months or so. Make it specific and vivid, e.g. ‘The CEO of X is on stage announcing their partnership in October with your major competitor’ or ‘We’re out of money and don’t exist, and you’re back to working at your previous job you hated.’
- Set a timer for 15 minutes or so, and, separately in silence, each scribble down as many potential reasons why this terrible future came to pass, e.g. ‘Their CEO tried it out, happened to get a buggy version, and lost faith in us’, or ‘The lawyers axed the project because of regulatory hurdles’. I usually use either Google Docs or Post-its to make the following phases easier.
- Read through everyone’s items out loud, quickly. There will be duplicates and related issues – as you go through them, place them into groups.
- Then, for each group of issues, ask ‘what could we do to fix/de-risk this?’. [Maybe do this in silence individually first too]. You’ll start to see that there are a few things you could do that will help considerably de-risk multiple issues at the same time. Assign one person to be responsible for each approach you agree to act on.
- By this point, you should feel like you’ve looked into the abyss but come out the other side, with a measure of catharsis achieved. There’s something about the perspective you get from doing this exercise that makes it much easier to make hard choices (I find).
- Set a date to revisit in 6 weeks (to make sure you actually addressed the risks you’ve just identified, and see where things stand).
I was sitting in a sun-warmed pizza restaurant in London last week talking about deep learning libraries. Everyone had their favourites. I was betting on TensorFlow, the new kid in town released by Google in late 2015. In response, a Torch fan pointed out that Google may invest in building up TensorFlow internally, but there’s no reason for them to invest in the shared, external version.
This got me thinking – why has Google open sourced TensorFlow?
Naively, I usually assume that companies keep their crown jewels proprietary while open sourcing the periphery. In other words, keep your secret sauce close to your chest – but share the stuff that’s more generic, since it builds brand and goodwill, others may contribute helpfully, and you’re not straightforwardly giving a leg-up to your direct competitors.
Google’s approach to open source has been a little more strategic than this. Look at a handful of their major open source projects – Android, Chromium, Angular, Go, Dart, V8, Wave, WebM. The motivations behind them are various:
- Android, Angular, Chromium, V8, Wave, WebM – creating a new version of an existing technology (free, better engineered, or faster) to disrupt an incumbent, or increase usage and thus drive revenue for Google’s core businesses.
- Go, Dart and the long tail of minor projects are peripheral to their goals and serve less direct strategic interest.
For TensorFlow to make sense and be worthy of long-term support from Google, it needs to fall in the former category.
It is indeed a new version of an existing technology – it’s free, it’s better engineered, though not yet faster.
So, is it intended to either disrupt an incumbent, or to increase usage and thus drive revenue for core Google businesses? I can only think of two possibilities:
- TensorFlow is intended to be a major strategic benefit for Android. Machine learning is going to power a wave of new mobile applications, and many of them need to run locally rather than as a client-server app, whether for efficiency, responsiveness or bandwidth reasons. If TensorFlow makes it easier to develop cross-platform, efficient mobile machine learning solutions for Android but not for iOS, that could give the Android app market a major boost.
- TensorFlow is intended to be a major strategic benefit for Google’s platform/hosting, and to disrupt AWS. Right now, it’s pretty difficult and expensive to set up a cloud GPU instance. TensorFlow opens up the possibility of a granularly-scalable approach to machine learning that allows us to finally ignore the nitty-gritty of CUDA installations, Python dependencies, and multiple GPUs. Just specify the size of network you want, and TensorFlow allocates and spreads it across hardware as needed. This is why TensorBoard was part of the original implementation, and why AWS support was an afterthought. “Pay by the parameter”. If I had to guess, I’d say this is the major reason for open sourcing TensorFlow.
I want something like the above to be true, because I want there to be a strategic reason for Google to invest in TensorFlow, and I want it to get easier and easier to develop interesting and complex deep learning apps.
What if I suggested that you finish each day with nothing left on your todo list? This is the only rule of Todo Zero.
You might find yourself biting back some choice words. This sounds like unhelpful advice from someone with a much simpler life than yours.
Not so fast. Picture a world-class juggler with half-a-dozen balls in motion. How many balls do they have in their hands at once? None, one, or two. Never more than two. The remainder are in the air.
By analogy, work on just one or two things at a time. The remainder can be scheduled for some time in the future. In this way, it’s very possible to finish what’s currently on your list.
Otherwise, all of the competing priorities of a long list clamour for your attention. They crowd one another out, making it impossible to focus. When you’re pulled in many directions, you’ll end up immobilized and demotivated.
At least that’s what has happened to me. My implicit solution was to procrastinate until panic seized me, and then enjoy its temporary clarity of focus.
So, here’s a recipe for Todo Zero that will take an hour or two to start with:
- Go through your todo list and pull out anything that’s going to take less than 10 minutes.
- Pick out the one or two jobs that you really want to tackle – these should be the most important or urgent things on your list. Break them down into pieces that you could tackle today if you really put your mind to it, and note them down.
- Schedule everything else as future events in your calendar (I usually just assign them to a date without a time). Give yourself enough room before the deadline to finish them without rushing. Don’t be over-optimistic about how many or how quickly you can work through them.
So, that leaves you with quick tasks that take less than 10 minutes, along with the one or two most urgent/important jobs for today.
Marvel at your wonderfully shortened todo list. Look away, take a deep breath. Do not look at your email. Make a coffee. Feel a little calmer than you did, and enjoy it.
Now, let’s do the same for your email.
- Install the Boomerang for Gmail plugin, and pay the $5/month personal subscription for it (read this if you have hardcore information security requirements).
- Find any emails that are going to take less than 10 minutes to reply to, and boomerang them for 2 hours’ time.
- Pull out one or two emails that are urgent or important, and boomerang them for 1 hour’s time.
- If you have the energy, boomerang each of your remaining emails for future times individually (tomorrow, a week away or a month away, depending on urgency). If you don’t have the energy, just boomerang them wholesale for tomorrow morning.
Stand up, and take a deep breath. Walk around for a few minutes, and make a cup of coffee. This is going really well.
- By the time you get back, you should be staring at a short todo list and a pretty clear inbox. [If anything new has landed, or any have boomeranged back, send them away for an hour. We need a clear head]
- Now, let’s dispatch the less-than-ten-minute odds & ends tasks. Do some of them, most of them, all of them – it doesn’t matter. Even a few will give you back a sense of momentum.
- Your most urgent emails have boomeranged back. Deal with them.
Take a break.
At this point, you’re close to a clean slate: just your important tasks remain. You probably have some meetings and stuff. Have lunch. Refresh.
- Now, it’s time to tackle those one or two important high-priority tasks-for-today.
- Picture yourself at the end of the day, leaning back in your chair with your hands knitted behind your head, smugly. For that to happen, double down on those one or two most important things, and the rest can wait. You will feel great.
- Don’t do anything else today. Don’t check your email if you can avoid it. Your goal is to boomerang away (by email or calendar) anything but them.
With any luck, you made progress on those one or two most important tasks.
Armed with this approach, you can triage your own life. You can choose to focus on the most urgent or important things first, and ignore the rest. They’ll shamble back when their time has come, and then you can dispatch them in turn.
P.S. There are a few tools that will help:
- Google Calendar – add a new ‘Todo’ calendar, whose notifications are set by default to email you at the time of the event.
- Boomerang for Gmail plugin – allows you to banish emails for as long as you choose.
- Any simple todo list app or text editor of your choosing. It doesn’t matter.
P.P.S. One final note. I can’t juggle two balls, let alone six. So take that into account, seasoned with a pinch of salt, in reading this.
P.P.P.S. Of course, there is nothing that’s original here. It’s a death-metal-mashup of Inbox Zero and GTD. It’s not always feasible to work like this. If you don’t procrastinate, you probably don’t need it. Etc.
Hearing those words makes me feel like I’m tied mutely to a railway track, unable to scream for help as a train thunders towards me. We humans are walking sacks of blood, bile and bias, and estimating how long things will take brings out the worst in us.
A product manager recently asked me whether one can get better at knowing if things are easy or hard, and how long they will take. The good news is that, with practice, you can help people estimate much better than they would on their own.
Understand the problem you’re trying to solve.
If you don’t understand the problem well enough, you’re certainly blind to its potential complexities. Product managers are often in a *better* position than anyone else to understand the problem!
Understand what’s involved in the proposed solution(s).
This can be the trickiest part for non-engineers, because the details of the solution may sometimes be pretty arcane. Here’s what you can do:
- You can go a long way by asking good questions about how things work, and what’s involved in the solution. Listen carefully to the answers. If they don’t make sense, ask for a higher-level explanation, or from a different person. Explain it back – that will make sure you’ve got it right and help you internalise it. Take good notes. Over time, you’ll start to see how the pieces interconnect, and what problems are similar to one another, and this will get easier and easier.
- Don’t ask for an estimate for the whole solution. Break the solution down into pieces, estimate the size of each piece, and add them back together. In my experience, people can’t reliably estimate how long things will take beyond a few hours – so if the estimates are much bigger than this, break the pieces down into smaller and smaller chunks.
- Be the rubber duck!
- Offer to pair-program with a developer during the unit testing. You’ll get a really deep understanding of how the system works, and where the difficulties lie. Better still, if the tests are written before the code, the test suite provides a kind of scorecard for how close you are to a solution, and you’ll reduce time spent in QA.
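The break-it-down approach above amounts to simple arithmetic with a sanity check. Here’s a minimal sketch (the task names, hours, and the 4-hour threshold are all made-up illustrations, not numbers from this post):

```python
# Piecewise estimation: estimate small chunks, flag anything too big to
# trust, then sum. A chunk beyond a few hours is a sign to break it down
# further before believing the estimate.

MAX_TRUSTED_HOURS = 4  # beyond this, people can't reliably estimate

pieces = {
    "add DB migration": 2,
    "write API endpoint": 3,
    "update client code": 6,   # too big - should be split further
    "integration tests": 2,
}

needs_breakdown = [name for name, hrs in pieces.items() if hrs > MAX_TRUSTED_HOURS]
total = sum(pieces.values())

print(f"Total estimate: {total} hours")
for name in needs_breakdown:
    print(f"'{name}' is over {MAX_TRUSTED_HOURS}h - break it into smaller chunks")
```

The point isn’t the arithmetic – it’s that the flagged pieces tell you where your estimate is still a guess.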
Be alert for pitfalls and cognitive biases that lead to poor estimates.
Human beings tend to be lazy about thinking through all the pieces of a complete solution – focusing on the major parts, or the interesting parts, and ignoring the details or the final 20% of polish that takes all the time. They also tend to focus on the best case (if everything goes right) and ignore all the things that might go wrong. You never know exactly what will go wrong, but if you have a sense of some possible pitfalls, you can factor them into your estimate. Possible approaches:
- Start by asking out loud: ‘What are the hidden traps, complications, edge cases, difficulties or things that could go wrong? When we did similar things in the past, how long did they end up taking? Were there surprise pitfalls that made it harder than we anticipated?’ Or run a premortem. You’ll get much better estimates after this discussion.
- Use Planning Poker as an estimation approach. Each person makes an estimate in isolation – this forces them to think things through, and avoids estimates being dominated by what was said first or most loudly. The discussion afterwards creates an informed consensus view, and provides immediate feedback for people whose estimates are wildly off.
- As a last resort: make an optimistic estimate and double it.
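Planning Poker’s core mechanic – estimate in isolation, then compare – can be sketched in a few lines (a toy illustration; the 2× spread threshold and the names are my own made-up choices, not part of any standard):

```python
# Planning Poker: everyone estimates in isolation, then all reveal at once.
# A wide spread signals the task is poorly understood and needs discussion
# before anyone commits to a number.

def needs_discussion(estimates, spread_factor=2.0):
    """Flag the round for discussion if the largest estimate is more than
    spread_factor times the smallest."""
    estimates = list(estimates)
    return max(estimates) > spread_factor * min(estimates)

# Three people estimate the same task (in story points) without conferring.
round_1 = {"alice": 3, "bob": 8, "carol": 5}

if needs_discussion(round_1.values()):
    print("Wide spread - discuss assumptions, then re-estimate.")
else:
    consensus = sorted(round_1.values())[len(round_1) // 2]  # median
    print(f"Converged on {consensus} points.")
```

The value is in the disagreement: when Bob says 8 and Alice says 3, one of them knows something the other doesn’t, and the discussion surfaces it.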
Learn from feedback.
- Force yourself (or the project team) to make an estimate in advance, then during the project retrospective, compare the actual time taken to the estimated time. That’s the best way for everyone to learn from feedback: ‘We thought it was going to be X, but it turned out to be 2X’.
- If things take much longer than anticipated, ask how we could have predicted this in advance. That might help you avoid similar estimation mistakes in future.
- Notice if certain kinds of tasks tend to take longer than anticipated.
- Notice if certain people tend to be inaccurate, and give them feedback on this.
Abe Gong asked for good examples of ‘data sidekicks’.
I still haven’t got the hang of distilling complex thoughts into 140 characters, and so I was worried my reply might have been compressed into cryptic nonsense.
Here’s what I was trying to say:
Let’s say you’re trying to do a difficult classification on a dataset that has had a lot of preprocessing/transformation, like fMRI brain data. There are a million reasons why things could be going wrong.
Things could be failing for meaningful reasons, e.g.:
- the brain doesn’t work the way you think, so you’re analysing the wrong brain regions or representing things in a different way
- there’s signal there but it’s represented at a finer-grained resolution than you can measure.
But the most likely explanation is that you screwed up your preprocessing (mis-imported the data, mis-aligned the labels, mixed up the X-Y-Z dimensions etc).
If you can’t classify someone staring at a blank screen vs a screen with something on it, it’s probably something like this, since visual input is pretty much the strongest and most widespread signal in the brain – your whole posterior cortex lights up in response to high-salience images (like faces and places).
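The sanity check can be sketched with toy data (everything here is invented for illustration – the trial counts, the signal strength, and the nearest-mean classifier all stand in for a real fMRI pipeline and a real decoder):

```python
# Sanity check: before chasing subtle effects in heavily preprocessed data,
# verify you can decode a signal that should be trivially strong
# (blank screen vs stimulus). Chance-level accuracy here points to a
# preprocessing bug (misaligned labels, scrambled dimensions, etc.),
# not a subtle scientific finding.

import random

random.seed(0)
N_TRAIN, N_TEST, N_VOXELS, SIGNAL = 100, 50, 20, 1.5

def make_trial(label):
    # Stimulus trials (label 1) carry a strong added response in every voxel.
    return [random.gauss(label * SIGNAL, 1.0) for _ in range(N_VOXELS)]

def mean_activity(trial):
    return sum(trial) / len(trial)

# "Train": compute the mean activity per condition.
train = [(label, make_trial(label)) for label in (0, 1) for _ in range(N_TRAIN)]
means = {c: sum(mean_activity(t) for l, t in train if l == c) / N_TRAIN
         for c in (0, 1)}

# "Test": classify each held-out trial by its nearest condition mean.
test = [(label, make_trial(label)) for label in (0, 1) for _ in range(N_TEST)]
correct = sum(
    1 for label, t in test
    if min((0, 1), key=lambda c: abs(mean_activity(t) - means[c])) == label
)
accuracy = correct / len(test)
print(f"Blank vs stimulus accuracy: {accuracy:.2f}")
```

If even this near-perfect separation comes back at chance on your real data, go back and audit the import, label alignment and dimension ordering before touching the science.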
In the time I spent writing this, Abe had already figured out what I meant 🙂