Why has Google open sourced TensorFlow?

I was sitting in a sun-warmed pizza restaurant in London last week talking about deep learning libraries. Everyone had their favourites. I was betting on TensorFlow, the new kid in town released by Google in late 2015. In response, a Torch fan pointed out that Google may invest in building up TensorFlow internally, but there’s no reason for them to invest in the shared, external version.

This got me thinking – why has Google open sourced TensorFlow?

Naively, I usually assume that companies keep their crown jewels proprietary while open sourcing the periphery. In other words, keep your secret sauce close to your chest – but share the stuff that’s more generic, since it builds brand and goodwill, others may contribute helpfully, and you’re not straightforwardly giving a leg-up to your direct competitors.

Google’s approach to open source has been a little more strategic than this. Look at a handful of their major open source projects – Android, Chromium, Angular, Go, Dart, V8, Wave, WebM. The motivations behind them are various:

  • Android, Angular, Chromium, V8, Wave, WebM – creating a new version of an existing technology (free, better engineered, or faster) to disrupt an incumbent, or increase usage and thus drive revenue for Google’s core businesses.
  • Go, Dart and the long tail of minor projects are peripheral to their goals and serve less direct strategic interest.

For TensorFlow to make sense and be worthy of long-term support from Google, it needs to fall in the former category.

It is indeed a new version of an existing technology – it’s free, it’s better engineered, though not yet faster.

So, is it intended either to disrupt an incumbent, or to increase usage and thus drive revenue for Google’s core businesses? I can only think of two possibilities:

  1. TensorFlow is intended to be a major strategic benefit for Android. Machine learning is going to power a wave of new mobile applications, and many of them need to run locally rather than as client-server apps, whether for efficiency, responsiveness or bandwidth reasons. If TensorFlow makes it easier to develop cross-platform, efficient mobile machine learning solutions for Android but not for iOS, that could give the Android app market a major boost.
  2. TensorFlow is intended to be a major strategic benefit for Google’s platform/hosting, and to disrupt AWS. Right now, it’s pretty difficult and expensive to set up a cloud GPU instance. TensorFlow opens up the possibility of a granularly-scalable approach to machine learning that allows us to finally ignore the nitty-gritty of CUDA installations, Python dependencies, and multiple GPUs. Just specify the size of network you want, and TensorFlow allocates and spreads it across hardware as needed – see the sketch after this list. This is why TensorBoard was part of the original implementation, and why AWS support was an afterthought. “Pay by the parameter”. If I had to guess, I’d say this is the major reason for open sourcing TensorFlow.
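
As a concrete illustration of that hardware abstraction, here’s a minimal sketch using TensorFlow’s 1.x-style Python API. The shapes and variable names are made up for illustration; the point is simply that you describe the graph, and TensorFlow’s placer decides which device each op runs on unless you pin it explicitly.

```python
# Minimal sketch of TensorFlow's device abstraction (1.x-style API).
# All shapes and names here are hypothetical, chosen just for illustration.
import numpy as np
import tensorflow as tf

# Describe a toy two-layer network without mentioning hardware at all.
x = tf.placeholder(tf.float32, shape=[None, 784], name="x")
w1 = tf.Variable(tf.random_normal([784, 256]), name="w1")
h = tf.nn.relu(tf.matmul(x, w1))

# Optionally pin part of the graph to a specific device; anything not pinned
# is assigned automatically to whatever CPUs/GPUs are available.
with tf.device("/gpu:0"):
    w2 = tf.Variable(tf.random_normal([256, 10]), name="w2")
    logits = tf.matmul(h, w2)

config = tf.ConfigProto(allow_soft_placement=True, log_device_placement=True)
with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    # Run one dummy batch just to show the graph executes end to end.
    print(sess.run(logits, feed_dict={x: np.zeros((1, 784), dtype=np.float32)}))
```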

I want something like the above to be true, because I want there to be a strategic reason for Google to invest in TensorFlow, and I want it to get easier and easier to develop interesting and complex deep learning apps.

Sanity checks as data sidekicks

Abe Gong asked for good examples of ‘data sidekicks’.

I still haven’t got the hang of distilling complex thoughts into 140 characters, and so I was worried my reply might have been compressed into cryptic nonsense.

Here’s what I was trying to say:

Let’s say you’re trying to do a difficult classification on a dataset that has had a lot of preprocessing/transformation, like fMRI brain data. There are a million reasons why things could be going wrong.

(sorry, Tolstoy)

Things could be failing for meaningful reasons, e.g.:

  • the brain doesn’t work the way you think, so you’re analysing the wrong brain regions, or it’s representing things in a different way than you assume
  • there’s signal there, but it’s represented at a finer-grained resolution than you can measure.

But the most likely explanation is that you screwed up your preprocessing (mis-imported the data, mis-aligned the labels, mixed up the X-Y-Z dimensions etc).

If you can’t classify someone staring at a blank screen vs a screen with something on it, it’s probably something like this, since visual input is pretty much the strongest and most widespread signal in the brain – your whole posterior cortex lights up in response to high-salience images (like faces and places).
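
Here’s the kind of sanity check I have in mind, as a hedged sketch in Python with scikit-learn. The function name, data shapes and threshold are all hypothetical; the idea is just that if a simple classifier can’t even separate blank-screen timepoints from stimulus-on timepoints, you should go back and audit your preprocessing before blaming the science.

```python
# Hypothetical sanity-check sketch: can we decode "blank screen" vs
# "stimulus on screen" from preprocessed fMRI data? If not, suspect the
# preprocessing (imports, label alignment, axis ordering) before the science.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def visual_sanity_check(bold, stimulus_on, chance=0.5, margin=0.1):
    """bold: (n_timepoints, n_voxels) array; stimulus_on: binary labels."""
    clf = LogisticRegression(max_iter=1000)
    acc = cross_val_score(clf, bold, stimulus_on, cv=5).mean()
    if acc < chance + margin:
        print("Sanity check FAILED (accuracy %.2f) - audit the preprocessing." % acc)
    else:
        print("Sanity check passed (accuracy %.2f)." % acc)
    return acc

# Toy usage with random data, which should hover around chance:
rng = np.random.RandomState(0)
visual_sanity_check(rng.randn(200, 500), rng.randint(0, 2, 200))
```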

In the time I spent writing this, Abe had already figured out what I meant 🙂

The Pittsburgh EBC competition

Try and picture the scene: you’re in a narrow tube in almost complete darkness, there’s a loud thumping noise surrounding you, and you’re watching episodes of the 90s sitcom ‘Home Improvement’, with Tim ‘The Tool Man’ Taylor and his family. There’s a panic button in case you feel claustrophobic, but it’s all over in less than an hour. It sounds a little surreal, but that’s what it would have been like to be a subject whose functional magnetic resonance imaging (fMRI) brain data was used in last year’s Pittsburgh Brain Analysis Competition.

After you’ve watched three episodes, kindly folk in glasses and white coats would take you out of the scanner bore, give you a glass of water and then over the next few days, they’d ask you to watch those same three episodes again over and over. On the second viewing, they’d ask you ‘How amused are you?’ every couple of seconds. On the third viewing, they’d keep wanting to know how aroused you are on a moment-by-moment basis. Then, ‘Can you see anyone’s face on the screen?’, ‘Is there music playing?’, ‘Are people speaking?’ and so on, until you’ve watched every moment of every episode thirteen times, each time being asked something different about your experience.

Our job, as a team entering the competition, was to try and understand the mapping between your brain data and the subjective experiences you reported. For two of the episodes, we were given your brain data along with the thirteen numbers for every corresponding moment, describing your arousal, amusement, whether there were faces on the screen, music playing, people speaking and so on. Our team, comprising psychologists, neuroscientists, physicists and engineers, put together a pipeline of algorithms and techniques to whittle down your brain to just the areas we needed and smooth away as much of the noise and complexity as possible. Think of these first two episodes as the ‘training’ data. Then, we were given only the brain data for the third episode, the ‘test’ episode, from which we had to predict the reported experience ratings.

Our predictions were then correlated with the subjects’ actual reports, and we were given a score. We ended up coming second in the whole competition, and we’re hoping for the top spot in 2007. Much of this effort has had a direct payoff for our day-to-day research. We now routinely incorporate a lot of these machine learning techniques when trying to understand the representations used by different neural systems, and how they relate to behaviour.
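
For readers who like to see the shape of such a pipeline, here’s a deliberately simplified sketch in Python. It is not our actual competition pipeline – the arrays, model choice (ridge regression) and shapes are all hypothetical – but it shows the basic train-on-two-episodes, predict-the-third, score-by-correlation structure described above.

```python
# Simplified, hypothetical sketch of the train/test structure described above:
# fit a regression from brain data to each of the 13 reported features on the
# training episodes, predict the held-out episode, and score by correlation.
import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import Ridge

def fit_and_score(train_bold, train_ratings, test_bold, test_ratings):
    """train_bold: (n_train_timepoints, n_voxels); train_ratings: (n_train_timepoints, 13).
    test_bold / test_ratings: same layout for the held-out episode."""
    correlations = []
    for feature in range(train_ratings.shape[1]):
        model = Ridge(alpha=1.0).fit(train_bold, train_ratings[:, feature])
        predicted = model.predict(test_bold)
        r, _ = pearsonr(predicted, test_ratings[:, feature])
        correlations.append(r)
    return float(np.mean(correlations))  # average correlation across features

# Toy usage with random data (expected to score near zero):
rng = np.random.RandomState(0)
print(fit_and_score(rng.randn(400, 500), rng.randn(400, 13),
                    rng.randn(200, 500), rng.randn(200, 13)))
```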

Members of the team: David Blei, Eugene Brevdo, Ronald Bryan, Melissa Carroll, Denis Chigirev, Greg Detre, Andrew Engell, Shannon Hughes, Christopher Moore, Ehren Newman, Ken Norman, Vaidehi Natu, Susan Robison, Greg Stephens, Matt Weber, and David Weiss