Why has Google open sourced TensorFlow?

I was sitting in a sun-warmed pizza restaurant in London last week talking about deep learning libraries. Everyone had their favourites. I was betting on TensorFlow, the new kid in town released by Google in late 2015. In response, a Torch fan pointed out that Google may invest in building up TensorFlow internally, but there’s no reason for them to invest in the shared, external version.

This got me thinking – why has Google open sourced TensorFlow?

Naively, I usually assume that companies keep their crown jewels proprietary while open sourcing the periphery. In other words, keep your secret sauce close to your chest – but share the stuff that’s more generic, since it builds brand and goodwill, others may contribute helpfully, and you’re not straightforwardly giving a leg-up to your direct competitors.

Google’s approach to open source has been a little more strategic than this. Look at a handful of their major open source projects – Android, Chromium, Angular, Go, Dart, V8, Wave, WebM. The motivations behind them are various:

  • Android, Angular, Chromium, V8, Wave, WebM – creating a new version of an existing technology (free, better engineered, or faster) to disrupt an incumbent, or increase usage and thus drive revenue for Google’s core businesses.
  • Go, Dart and the long tail of minor projects are peripheral to their goals and serve less direct strategic interest.

For TensorFlow to make sense and be worthy of long-term support from Google, it needs to fall in the former category.

It is indeed a new version of an existing technology – it’s free, it’s better engineered, though not yet faster.

So, is it intended to either disrupt an incumbent, or to increase usage and thus drive revenue for core Google businesses? I can only think of two possibilities:

  1. TensorFlow is intended to be a major strategic benefit for Android. Machine learning is going to power a wave of new mobile applications, and many of them need to run locally rather than as a client-server app, whether for efficiency, responsiveness or bandwidth reasons. If TensorFlow makes it easier to develop cross-platform, efficient mobile machine learning solutions for Android but not for iOS, that could give the Android app market a major boost.
  2. TensorFlow is intended to be a major strategic benefit for Google’s platform/hosting, and to disrupt AWS. Right now, it’s pretty difficult and expensive to set up a cloud GPU instance. TensorFlow opens up the possibility of a granularly-scalable approach to machine learning that allows us to finally ignore the nitty-gritty of CUDA installations, Python dependencies, and multiple GPUs. Just specify the size of network you want, and TensorFlow allocates and spreads it across hardware as needed. This is why TensorBoard was part of the original implementation, and why AWS support was an afterthought. “Pay by the parameter”. If I had to guess, I’d say this is the major reason for open sourcing TensorFlow.

I want something like the above to be true, because I want there to be a strategic reason for Google to invest in TensorFlow, and I want it to get easier and easier to develop interesting and complex deep learning apps.

“Oh, that should be easy – maybe a few minutes…”

Hearing those words makes me feel like I’m tied mutely to a railway track, unable to scream for help as a train thunders towards me. We humans are walking sacks of blood, bile and bias, and estimating how long things will take brings out the worst in us.

A product manager recently asked me whether one can get better at knowing whether things are easy or hard, and how long they will take. The good news is that, with practice, you can help people estimate much better than they would on their own.

Understand the problem you’re trying to solve.

If you don’t understand the problem well enough, you’re certain to be blind to its potential complexities. Product managers are often in a *better* position than anyone else to understand the problem!

Understand what’s involved in the proposed solution(s).

This can be the trickiest part for non-engineers, because the details of the solution may sometimes be pretty arcane. Here’s what you can do:

  • You can go a long way by asking good questions about how things work, and what’s involved in the solution. Listen carefully to the answers. If they don’t make sense, ask for a higher-level explanation, or ask someone else. Explain it back – that will make sure you’ve got it right and help you internalise it. Take good notes. Over time, you’ll start to see how the pieces interconnect, and which problems are similar to one another, and this will get easier and easier.
  • Don’t ask for an estimate for the whole solution. Break the solution down into pieces, estimate the size of each piece, and add them back together. In my experience, people can’t reliably estimate how long things will take beyond a few hours – so if the estimates are much bigger than this, break the pieces down into smaller and smaller chunks.
  • Be the rubber duck!
  • Offer to pair-program with a developer during unit testing. You’ll get a really deep understanding of how the system works, and where the difficulties lie. Better still, if the tests are written before the code, the test suite provides a kind of scorecard for how close the solution is to complete, and you’ll reduce time spent in QA.
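The break-it-down approach above can be sketched in a few lines of Python. The tasks and hours here are entirely hypothetical, purely to show the arithmetic:

```python
# Bottom-up estimation: break the solution into small chunks,
# estimate each chunk in hours, and add them back together.
# (Hypothetical tasks and numbers, for illustration only.)
estimates = {
    "add database column": 1.0,
    "write migration script": 2.0,
    "update API endpoint": 3.0,
    "update UI form": 2.5,
    "write tests": 3.0,
}

# Anything bigger than a few hours is too coarse to trust –
# it should be broken down further before summing.
too_big = [task for task, hours in estimates.items() if hours > 4]

total = sum(estimates.values())
print(f"Total estimate: {total} hours")       # 11.5
print(f"Needs further breakdown: {too_big}")  # []
```

The point isn’t the trivial sum – it’s that writing the chunks down forces the breakdown to happen at all.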

Be alert to the pitfalls and cognitive biases that lead to poor estimates.

Human beings tend to be lazy about thinking through all the pieces of a complete solution – they focus on the major or interesting parts, and ignore the details (or the final 20% of polish that ends up taking all the time). They also tend to focus on the best case – what happens if everything goes right – and ignore all the things that might go wrong. You never know exactly what will go wrong, but if you have a sense of some possible pitfalls, you can factor them into your estimate. Possible approaches:

  • Start by asking out loud: ‘What are the hidden traps, complications, edge cases, difficulties or things that could go wrong? When we did similar things in the past, how long did they end up taking? Were there surprise pitfalls that made them harder than we anticipated?’ Or run a premortem. You’ll get much better estimates after this discussion.
  • Use Planning Poker as an estimation approach. Each person makes an estimate in isolation – this forces them to think things through, and avoids estimates being dominated by what was said first or most loudly. The discussion afterwards creates an informed consensus view, and provides immediate feedback for people whose estimates are wildly off.
  • As a last resort: make an optimistic estimate and double it.

Learn from feedback.

  • Force yourself (or the project team) to make an estimate in advance, then, during the project retrospective, compare the actual time taken to the estimate. This is the best way for everyone to learn from feedback! ‘We thought it was going to be X, but it turned out to be 2X.’
  • If things take much longer than anticipated, ask how we could have predicted this in advance. That might help you avoid similar estimation mistakes in future.
  • Notice if certain kinds of tasks tend to take longer than anticipated.
  • Notice if certain people tend to be inaccurate, and give them feedback on this.

Blogging with WordPress and Emacs

When it comes to tools, I am a hedgehog rather than a fox. I like to have a small number of tools, and to know them well.

I recently resolved to start writing again. But I decided that I needed to sharpen my pencils first.

I have plans on how publishing and sharing should work. Grand plans. Too grand, perhaps.

So for now, I wrote something simple for myself. Now I can type away, press buttons… publish.

If you like Emacs, Python and WordPress, this might be interesting to you too. If not, it certainly won’t be.

wordpress-python-emacs GitHub repository

Most of the work is done by this great Python/WordPress library. Thank you.

I wrote some simple Python scripts. One grabs all my existing blog posts. One looks through their titles, and checks them against the filename to see if this is a new post.

And then there’s a very simple Emacs function that calls them to save/publish the current text file.
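For flavour, here’s a minimal sketch of the title-checking step in Python. The function names and the filename-to-title convention are my own invention, not the actual scripts (which fetch the existing titles from the blog itself via the WordPress library above):

```python
from pathlib import Path

def post_title(filename):
    # Derive a post title from a filename, e.g.
    # "my-first-post.txt" -> "my first post".
    # (A hypothetical convention, for illustration.)
    return Path(filename).stem.replace("-", " ")

def is_new_post(title, existing_titles):
    # A post is new if no existing post shares its title
    # (compared case-insensitively).
    return title.lower() not in (t.lower() for t in existing_titles)

# In the real scripts the existing titles come from the blog;
# hard-coded here so the sketch is self-contained.
existing = [
    "Why has Google open sourced TensorFlow?",
    "Blogging with WordPress and Emacs",
]

print(is_new_post(post_title("my-first-post.txt"), existing))  # True
print(is_new_post(post_title("blogging-with-wordpress-and-emacs.txt"),
                  existing))                                   # False
```

A new title means publish a new post; a match means update the existing one.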

I could add more things: deleting posts, or a proper workflow for moving from draft to published. Maybe later.

I wrote this post, then hit M-x wordpress-publish-this-file.

Self Control through software

Leo Efstathiou asked me recently whether I’d rather be smarter, or have more willpower. It took only a moment’s thought to realise that I’d take the self-control any time.


And so it was with a sense of wonder and optimism that I normally reserve for sunrises that I fired up Self Control: a Mac application that completely blacklists parts of the Internet. Like a gaoler with a blackjack, Self Control coshes any attempt to blunder down rabbit holes like Facebook or email for some time period you specify. It’s absolutely and delightfully watertight.


The beauty of this is its potential long-term effect. I want to counteract the variable reinforcement schedule that email and blogs provide – with Self Control’s help, I’m hoping to ensure zero reward from them for long enough to break the self-perpetuating cycle of reflexive refresh-pressing.

My take on emacsclient

Emacs is pretty lightweight relative to most modern editors, though by the time it loads all the modes and gets through all the uncompiled junk in my .emacs configuration, you wouldn’t know it.

Emacsclient is the solution – when you open a file with emacsclient, it doesn’t start up a whole new emacs – it just opens it in the running emacs, which is more or less instantaneous.

There are lots of webpages on this, so I won’t go into detail. Unfortunately, although you’d hope that calling ‘emacsclient’ would work just like ‘emacs’, this isn’t true:

  • I wanted to tell emacsclient to display an emacs frame, without feeding it any filename arguments to display. No dice.
  • If I don’t have a running emacs server (which emacsclient needs to connect to), it just gives an error message, rather than taking matters into its own hands and opening up a new emacs instance.
  • If I have emacs running on one computer, and I ssh into that computer, then I want to be able to type emacsclient and have a window show up. No dice. You have to explicitly feed it a display.

These issues were annoying enough that I nearly stopped using emacsclient. Various people have offered shell script hacks that handle some of these issues and more, but none of them solved the ones that bothered me. Plus, I like Python and I hate shell scripts. So I offer up a teeny Python script, emf-on-display. If you alias ‘emf-on-display’ to something handy like ‘e’, it makes emacsclient behave in a way that I find much more consistent with emacs.
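Here’s a minimal Python sketch of the behaviour I wanted – a reconstruction of the idea, not the actual emf-on-display script. Returning the command as a list keeps the decision logic easy to test before handing it to subprocess:

```python
import os

def emacsclient_command(filenames=(), display=None, server_running=True):
    # Decide what to run, fixing the three annoyances above:
    # no server -> start a fresh emacs instead of erroring out;
    # no filenames -> just raise a frame on the right display;
    # otherwise -> hand the files to the running emacs.
    display = display or os.environ.get("DISPLAY", ":0.0")
    if not server_running:
        return ["emacs"] + list(filenames)
    if not filenames:
        # Evaluate elisp to pop up a frame on the given display
        # (this is the bit that needs emacs 22 or gnuclient).
        return ["emacsclient", "--eval",
                '(make-frame-on-display "%s")' % display]
    return ["emacsclient", "--no-wait"] + list(filenames)

# e.g. subprocess.call(emacsclient_command(["notes.txt"]))
```

The ssh case falls out of the display argument: pass the display you’re sitting at, and the frame appears there.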

P.S. I have not dealt with the possibility that you’re running this in the terminal, and you don’t have an X11 display at all.

P.P.S. I suspect that it requires emacs 22 (or gnuclient) to work, since it relies on being able to pass ‘make-frame-on-display’ as elisp code to be evaluated.