Eroding our minds

I said that I thought “there’s something irresponsible about making money from advertising”.

Matt Weber was right to point out that although people hate the idea of targeted ads, they can be genuinely useful. That said, I don’t think a very large proportion of the available advertising real estate offers much scope for really great targeting.

[Of course, good advertising can be an art form in itself. And by funding most of our software and reading materials, advertising adds tremendous value to our lives.]

But even on the internet, most advertising still feels as though it’s about increasing our familiarity with the brand.

Think of advertising in terms of cognitive fluency, i.e. how easy we find something to process. There are lots of ways to make something fluent – make it easy to read, easy to pronounce, write it in a simple font, or in high contrast.

Things that are fluent (easy to process) get processed faster. We tend to like fluent things better, find fluent statements more valid. We think companies with fluent names are more valuable.

Advertisers have (implicitly) known this for a long time. By incessantly dinging our minds with an advert over and over, we are gently having that brand branded upon our minds, making it easier to process, more familiar, and making us unwittingly and unjustifiably like it more. Like the banks of a river worn smooth by the ceaseless flow, advertising erodes our minds.

If you are not paying for it, you’re not the customer; you’re the product being sold.

Communal interactive jukebox

[I wrote this in 2003 – there are still pieces of this vision that haven’t been realized]

why isn’t there a little wireless didgeridoo that just sits next to a cd player (stereo audio input), with a wireless network card, and maybe an ip address or a network id or something that you can initially configure easily/remotely by plugging in a computer via a usb or something, that just sits there and plays whatever your laptop running winamp tells it to by wireless???

apparently these exist already 😦
but they’re pretty crap at the moment – they’re proprietary, and are only just getting up to speed with 802.11b etc.

this doesn’t exist though:
you could set a password on it, and then anyone with a laptop nearby who knew its id and had the password could wrest control of it, e.g. at a party. better still, you could have a sort of queuing system/software for allowing different users to place requests, and people could vote on whether they like what’s playing and that person’s reputation would go up – like slashdot karma – it would be a sort of communal interactive jukebox
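Here’s a rough Python sketch of how the request/vote/karma loop might hang together – every name in it is invented, just to make the idea concrete:

```python
import heapq
import itertools

class CommunalJukebox:
    """Toy sketch: requests are queued by requester karma, and votes on
    the current track feed back into that requester's karma."""

    def __init__(self):
        self.karma = {}                      # user -> reputation score
        self.queue = []                      # heap of (-karma, tiebreak, user, track)
        self.now_playing = None
        self._tiebreak = itertools.count()   # preserves request order at equal karma

    def request(self, user, track):
        # Higher-karma users bubble towards the front of the queue
        priority = self.karma.get(user, 0)
        heapq.heappush(self.queue, (-priority, next(self._tiebreak), user, track))

    def next_track(self):
        _, _, user, track = heapq.heappop(self.queue)
        self.now_playing = (user, track)     # remember whose pick this is
        return track

    def vote(self, delta):
        # Listeners vote +1/-1 on what's playing; the requester's
        # reputation moves with it, like slashdot karma
        user, _ = self.now_playing
        self.karma[user] = self.karma.get(user, 0) + delta
```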

House MD, gibberish and clown school

I’d been watching a lot of House MD a while ago. Perhaps 1/3 of the show is taken up with mystical medical mumbo-jumbo that’s gibberish to me, and yet it’s still compelling. How can that be? It’s like watching a soap opera in a foreign language.

Then I talked to a friend who went to clown school, who didn’t find it at all surprising. He told me about his 40-minute graduation show that transfixed and amused audiences using only nonsense words.

House MD as a modern medical jabberwocky makes a certain amount of sense.

What’s blowin’ in the wind?

[Thanks to Stephen Hartley-Brewer for the kernel of this idea]

My brother is my musical weather vane, my song-canary who knows what’s good long before the rest of the world has cottoned on, and points out the things I’ll like. I treasure his advice, partly because it’s good, and partly because it comes from him.

But just for badness, pretend you don’t have a brother with his ear to the ground. Instead, you have a big computer that you’ve fed the listening habits of the entire world. Ask it two questions:

1) who was listening, a year ago, to music that’s hugely popular (or that I like) now?

Those are your early adopters.

2) what are they listening to now?

Throw in some collaborative filtering to tailor the recommendations for me.
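Sketched in Python over toy data (I’m assuming listening logs arrive as {user: set of tracks}; none of this is a real iTunes or Last.fm API):

```python
from collections import Counter

def early_adopters(logs_last_year, hits_now, k=100):
    """Question 1: who was listening, a year ago, to what's big now?
    Score each user by the overlap between their old listening and
    today's hits (or my favourite tracks, for a personal version)."""
    scores = {user: len(tracks & hits_now)
              for user, tracks in logs_last_year.items()}
    return sorted(scores, key=scores.get, reverse=True)[:k]

def next_big_things(logs_now, adopters, n=20):
    """Question 2: what are those early adopters listening to now?
    Rank tracks by how many adopters are playing them."""
    counts = Counter()
    for user in adopters:
        counts.update(logs_now.get(user, set()))
    return counts.most_common(n)
```

The collaborative-filtering step would then reweight each early adopter by how similar their taste is to mine.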

Are iTunes or Last.fm doing this to predict who’s going to be big next year?

Are the record labels doing this to predict where they should put their marketing money?

Can bands buy this information to figure out which users to send promo albums to?

If there was a prediction market for music, could my brother get paid to tell people what he likes and dislikes?

Dropbocumentation

Every time a programmer goes away for a few days, the piece of infrastructure they know best breaks. That’s just Murphy’s Algorithm.

If they had only written a 100-word overview with some examples, it would have saved someone else a painstaking day figuring out how things should work, why they suddenly don’t, and righting the world once more.

How do you make it likely that everyone writes down what they know while it’s still fresh? Think of edits as conversions (in the analytics sense) – our funnel stretches from signup to viewing to editing, and we want to maximize the number of edits.
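In back-of-envelope form, with made-up numbers:

```python
# Made-up numbers for a small team's wiki
funnel = {"signups": 40, "views": 2500, "edits": 60}

views_per_signup = funnel["views"] / funnel["signups"]
edit_rate = funnel["edits"] / funnel["views"]    # the conversion to maximize

print(f"{edit_rate:.1%} of views become edits")  # 2.4%
```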

How do we optimize the ‘edit’ conversion rate for a wiki?

  • Editing should happen in the same mode as viewing. If you have to click ‘edit’, then wait for a page refresh, then scroll down inside a teeny textbox in a browser, then hit save to see your changes … those steps create a barrier to entry. The conversion rate of views to edits will drop dramatically. Typos, inaccuracies, inscrutabilities and out-of-datenesses will accumulate.
  • It needs to be as available as possible. All and only your team can access and contribute to it, even if they’re on a different computer or offline.
  • Consolidate everything in one or two searchable places. When it’s hard to find something, you won’t want to start looking. If you have to search one by one through a wiki, your email, a bug tracker, the version control commit log and comments in the codebase, you’ll end up just tapping someone on the shoulder – the knowledge will never get planted in a way that it can grow.
  • No special knowledge. Wiki markups are confusing and confusable. WYSIWYG editors are a good start – but editing text in most browser textboxes feels like typing with chopsticks. And proprietary document formats are opaque and constricting.
  • No barrier to exit. I want to be able to easily (and ideally automatically) grab a dump of all our documentation, both as a backup and as an export.

After reviewing these possibilities over and over, these are the best solutions I’ve come up with for Memrise:

  • A few monolithic Google Docs. This has worked reasonably well, except that Google Docs still falters in an unwieldy and buggy way when dealing with even medium-sized documents. Boooo!
  • Etherpad clones. They seem pretty expensive for multi-user monthly subscriptions, and seem weak at linking and searching. Plus, they don’t work offline, and I don’t trust the companies behind them to be around in 5 years’ time.
  • Text files in Dropbox. The main downside to this is that you can’t easily inter-link text files, and they lack formatting, which makes them ugly to read. But they have no barriers to entry whatsoever.

    In an ideal world, someone would build a nice (optionally hosted?) wiki solution pulling and formatting Dropbox text files as webpages to give you the best of both worlds, perhaps combined with a few desktop apps and extensions to make offline viewing and editing more pleasant – something like the sketch below.
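As a sketch of what the web half of that might look like – assuming Flask, a synced ~/Dropbox/docs folder of text files, and a made-up [[wikilink]] convention:

```python
import re
from pathlib import Path

from flask import Flask, abort

DOCS = Path.home() / "Dropbox" / "docs"   # the synced folder of plain text files
app = Flask(__name__)

@app.route("/<name>")
def page(name):
    path = DOCS / f"{name}.txt"
    if not path.exists():
        abort(404)
    text = path.read_text()
    # Inter-link the text files: turn [[other-page]] into a hyperlink
    body = re.sub(r"\[\[(.+?)\]\]", r'<a href="/\1">\1</a>', text)
    return f"<pre>{body}</pre>"

if __name__ == "__main__":
    app.run()   # edits happen in any text editor; refresh to see them
```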

A hivemind with a sense of humor

I’m a little obsessed by the notion of a noosphere, a humming hivemind – not a humdrum, roaring average, but rather a superlinear interwoven sum of wits.

The Bible has this quality, with its sea of voices that are unabashedly inconsistent and yet superhumanly wise. But what I find most unsettling about the Bible is its lack of humor. To my knowledge, there’s not a jot of wit, humor or silliness in the whole thing. Perhaps this befits something with a purpose greater than simply sublime literature, but that dehumanizes it to me.

So, it’s endearing and cheering to see that Google can giggle [from autocompleteme.com]:

Though more soberingly, see what boyfriends and girlfriends search for.

How to beat an fMRI lie detector

In a not-so-distant dystopia, you might be placed in a brain scanner to test whether you’re telling the truth. Here’s how to cheat.

The polygraph

First, you’ll need some background on old-school lie-detection technology. [This is a simplified story – see polygraphs for a richer account.] Polygraphs are seismographs for the nervous system. They measure physiological responses such as heart rate, blood pressure, sweatiness through skin conductance, and breathing. When you’re anxious, angry, randy, in pain, or otherwise emotionally aroused, these measures spike automatically. The effort and stress of lying also causes them to spike.

Of course, if you’re trapped in a windowless room on trial for murder, these measures will probably be pretty high to begin with. So you’ll first be asked a few control questions to assess your baseline levels when lying and telling the truth, against which your physiological response to the important questions will be compared.

So, if you want to beat a polygraph, you either need to keep your physiological responses stable when you lie (which is difficult), or you need to artificially elevate your baseline response when telling the truth. The age-old technique is to place a thumb-tack in your shoe and press on it painfully with your toe when telling the truth, spiking your physiological responses and providing a misleading control so that your lies don’t stand out by comparison.
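In toy numbers (invented purely for illustration), the thumb-tack works by wrecking that comparison:

```python
import statistics

truth_baseline = [60, 62, 61, 63]   # heart rate on control questions, honest
tack_baseline  = [88, 91, 87, 90]   # same questions, pressing the thumb-tack
lie_response   = 92                 # heart rate on the important question

def flagged(response, baseline, threshold=2.0):
    """Crude z-score test: does the response stand out from baseline?"""
    mu = statistics.mean(baseline)
    sd = statistics.stdev(baseline)
    return (response - mu) / sd > threshold

print(flagged(lie_response, truth_baseline))  # True  - the lie stands out
print(flagged(lie_response, tack_baseline))   # False - baseline inflated, lie hidden
```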

Functional magnetic resonance imaging

Now, on to fMRI. Simplifying again, the fMRI brain scanner takes a reading of the level of metabolic activity at thousands of locations around your brain every couple of seconds. Activity in a number of brain areas tends to be elevated when we lie, perhaps because we have to work harder to invent and keep track of the extra information involved in a lie, and override the default responses in the rest of the brain. Under laboratory conditions, accuracy at distinguishing truth from lie approaches 100%.

The modern machine learning algorithms used to make sense of the richer neural data are more sophisticated than those used in a polygraph. And they’re measuring your brain activity (albeit indirectly), so it might feel as though there’s no way to deceive them. But ultimately, they work in an analogous way to the polygraph, by comparing your neural response to the important questions with your neural response to the baseline questions. That means that they can be gamed in an analogous way – as you’re being asked the baseline questions, wiggle your head, take a deep breath, do some simple arithmetic or tell a lie in your head. Each of these will elevate the neural response artificially. By disrupting the baseline response, you disrupt the comparison.
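Schematically, the classification step looks something like this (random stand-in data rather than real voxels; real pipelines involve heavy preprocessing first):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 500))   # 80 questions x 500 voxel activations
y = rng.integers(0, 2, size=80)      # 0 = baseline/truth trial, 1 = lie trial

clf = LogisticRegression(max_iter=1000)
accuracy = cross_val_score(clf, X, y, cv=5).mean()
print(f"cross-validated truth/lie accuracy: {accuracy:.0%}")
```

The counter-measures above amount to corrupting the trials labeled 0: if the baseline responses no longer look like clean truth-telling, the two classes stop being separable.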

Possible flaws in this argument

This simplified account of how to cheat an fMRI lie detector has some issues.

Firstly, it rests on the idea that we’ll still use some kind of comparison between baseline and important questions. In the case of most recent fMRI analyses, this is certainly true. Although they use modern machine learning classification algorithms to compare against baseline, they still seem subject to the same problems as the simpler statistical tests used in polygraphs.

Above, I suggested taking a deep breath, doing simple arithmetic or telling a lie in your head during the baseline questions. Taking a deep breath increases the BOLD response measured by fMRI throughout your brain. The idea behind doing arithmetic or telling a lie in your head is to engage the brain areas involved in internal mental conflict detection (between areas of the brain that are pulling in different directions), executive control (over the rest of your brain), and working memory, whose activity changes when lying. As far as I know, all of the studies on lie detection seem to use naive participants, and no one has yet tested the efficacy of these counter-measures.

I have also assumed that the analysis would be run ‘within subject’. In other words, the machine learning classifier algorithms would be making a comparison between baseline and important questions for the *same person*. However, there have been attempts to train the algorithms on a corpus of data from multiple participants beforehand, and then apply them to a new brain. This approach is considerably and inherently less accurate (less than 90% as opposed to nearly 100%) since everyone’s brain is different, and since brain activity will probably vary for different kinds of lies. Indeed, there appears to be variability in the areas that have been identified by different experiments.
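The difference between the two regimes, again on stand-in data (LeaveOneGroupOut holds out one whole person per fold, i.e. the classifier never sees the test brain during training):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((80, 500))       # stand-in voxel data
y = rng.integers(0, 2, size=80)          # stand-in truth/lie labels
subjects = np.repeat(np.arange(8), 10)   # 8 people, 10 trials each

clf = LogisticRegression(max_iter=1000)
within = cross_val_score(clf, X, y, cv=5)   # same brains in train and test folds
across = cross_val_score(clf, X, y, cv=LeaveOneGroupOut(), groups=subjects)
print(within.mean(), across.mean())      # on real data, 'across' comes out lower
```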

There are alternative experimental paradigms to the basic questioning approach described here. For instance, one might show someone the scene of a crime, and look to see whether their brain registers familiarity. I haven’t looked into this approach. But fundamentally, this familiarity assessment is much more limited in the kinds of questions that can be asked, and furthermore, you only get one chance to assess someone’s familiarity (after which the stimulus is, by definition, familiar). That single response simply might not be enough data to go on.

All of the studies so far have employed ‘willing’ participants. In other words, the participants kept their heads still, told the truth when they were asked to, and lied when they were asked to. An uncooperative participant might move around more (blurring the image), show generally elevated levels of arousal that could skew their data, be in worse mental or physical condition, and come from a different population than the predominantly white, young, relaxed, intelligent and willing undergraduate participants. We don’t know how these factors change things, and it’s difficult to see how we might collect reliable experimental data to better understand them.

I haven’t considered alternative imaging methodologies here (such as EEG or infrared imaging). Mostly though, fMRI appears to be leading the field in terms of accuracy and effort spent, and all of these arguments should apply to EEG and other methods equally.

Why am I writing this?

There are a number of fMRI-based lie detection startups attracting government funding and attempting to charge for their services. I don’t begrudge them their entrepreneurial ambition, but I am dismayed by their hyperbolic avowals of success.

In truth, this is a new, mostly unproven technology that seems to work fairly well in laboratory conditions. But it’s subject to the same sensitivity/specificity tradeoffs that plague medical tests and traditional lie detection technologies. The allure of an ostensibly direct window into the mind with the shiny veneer of scientific infallibility is a beguiling combination.

Eventually, the limitations of this technology will be realized. I’d prefer to see this techno-myth punctured and caution exercised now, rather than after costly mistakes have been made. Cheeringly, the courts appear to take the same view (at least so far).

My credentials

I’m finishing my PhD in the psychology and neuroscience of human forgetting at Princeton. I’ve worked on the application of machine learning methods to fMRI for the last few years, was part of the prize-winning team in the Pittsburgh fMRI mind-reading competition, and led the development of a popular software toolbox for applying these algorithms for scientific analysis. However, I have no expertise in the neuroscience of cognitive control, lie detection or law.

So I apologize if I’m wrong or out of date anywhere here. If so, I’d be glad to see this pointed out and to amend things.