
It seems that Total Quality Management has come to the spam business.

Used to be, you could tell spam emails by the weird names, the non-English phrasing and the requests you'd have to be severely caffeine-deprived to respond to. One email I received, pretending rather unsuccessfully to be from a bank, had no fewer than ten spelling mistakes in it. If you are in the spam business, your job is to get people to click your links or install your malicious software – but luckily, judging from the ample evidence, most professional spammers are, well, not very professional.

That, however, may be changing. A few days ago, I found the following email in my inbox:

Subject: Statement of fees 2008/09

Please find attached a statement of fees as requested, this will be posted today. The accommodation is dealt with by another section and I have passed your request on to them today.

Kind regards.
Margarito


Quite cleverly done, and almost indistinguishable from a real business email. If it had caught me in a less astute moment, I might actually have opened the attached zip file. And over the last few weeks, I’ve also seen spam messages that are virtually identical to the legitimate requests that eBay sends out; I am sure that many an eBay user has already fallen into the trap. I suspect it is only a matter of time before I get a friend request from Facebook or LinkedIn that looks perfectly genuine, but is in reality the work of a clever spammer.

In short, it seems that professional spammers are getting better at their jobs. That’s bad news for the rest of us, but it is still interesting to follow how they are evolving to compete with their counterparts in the anti-spam business.

The only thing I am wondering is, why hasn’t this professionalisation of the spam business happened earlier?

In the creative industries, there is a persistent belief that the gatekeepers – movie producers, book publishers, talent scouts – seek truly original ideas. Nothing could be more wrong.

Everywhere, the scene is the same. When the time comes for the gatekeepers of the creative industries to say what kind of new content they are looking for, one word pops up again and again: originality. Hollywood moguls declare that the hunt is on for original movie manuscripts. Broadcasters sit on crowded MIPTV discussion panels and state that they look for unique, original ideas for new prime time television series. At literary seminars, book publishers announce that they are on the lookout for original voices, novel approaches to the novella. Everywhere, originality is lauded as the key ingredient, the central requirement for making it through the very narrow gates that separate the directors and the authors from the multitudes of hopeful wannabes.

In fact, nothing could be further from the truth. All the talk about true originality is best understood as a sort of industry jargon, not to be taken at face value. The truth is that the vast majority of gatekeepers are looking for unoriginal ideas – old ideas in a slightly new wrapping, just different enough from the predecessors to avoid the most abject accusations of plagiarism. There are gatekeepers out there who on occasion take a chance on something truly, radically original, but they are few and far between – and for good reason.

The issue is not that the gatekeepers are conservative by nature; they simply wish to keep their jobs, and doing that means becoming risk-averse. The creative industries are all marred by one ugly fact: they have very high failure rates when it comes to predicting which new products will actually work in the market. And unlike other industries, where consumers happily consume the exact same product for years – think of Coca-Colas or Big Macs – consumers of creative products continuously crave small doses of change. They want their Big Mac to taste slightly different every time they eat one – only they will not accept just any different taste; it has to be both new and good.

Getting this mix right has proven to be notoriously difficult for the gatekeepers of the creative industries; despite decades of looking, nobody has yet found the strategy that allows them to pinpoint the winners with any certainty (at least not before most of the production budget is spent). In the worst cases, like Hollywood, 95 percent of all movies fail to make a significant profit.

This is where the incentive structures that disfavor originality kick in – at the individual level. As a typical gatekeeper in mainstream Hollywood, commissioning a mediocre movie is not going to hurt your career significantly, because that is what most of your peers do in a given year, anyway. It is only the real flops, the catastrophic failures that can really hurt you. So, since success is unpredictable – it is basically a numbers game – the trick is to stay in the game long enough to get lucky. You do that by following a different, more viable strategy: avoid the risky-looking projects. Don’t bet on dark horses, or on new ones. Go instead for the safest bets: take something that has been proven to work, tweak one little detail, and pray. Specifically, pray that whatever it was that you tweaked, it hasn’t messed up the unfathomable inner workings of the tried-and-tested product line you based it on, and whose proven success formula you are now hoping to piggyback on.

This is the reason why in the creative industries, real originality is dangerous, at least in large quantities. Originality is an expensive and volatile spice, something content producers sprinkle cautiously on top of an old favourite recipe to lend it a veneer of novelty; it is never the main ingredient, not in mainstream media. Don't blame the gatekeepers for this; if anything, blame the consumers, or perhaps the nature of the business, which on average punishes risk-taking more than it rewards it, at least for incumbents. People who want to make a living out of selling ideas will do well to remember that in the media industry, 'original' is really just another word for 'untested'.

Cool website that allows normal people to do microfinance:

www.kiva.org

I think there could be a business idea in creating a similar site for normal entrepreneurs. Forget about getting 20 million dollars from venture capital companies or business angels – instead, find projects that can be started for less than 20,000 dollars, and post them on a similar site, so normal people can invest in them. While some safeguards would be needed to prevent abuse of the system, I am sure it can be done.

Like radioactive atoms, words have a half-life. Or rather, they have a recharge time: the time (or length of text) from the first time you use a word until you can use it again without vexing the reader.

Basically, when people read a book, they are not very conscious of the actual words and letters – they generally focus on the meaning that the words convey. But if the writer starts using the same words too much, the readers are torn from their state of absorption, as they suddenly become conscious of the actual words – something that also happens with grave spelling mistakes. To me, ignoring the rules of reusability is a sign of sloppy writing (disregarding the times when writers use the effect intentionally, as in poetry).

The most ordinary words have a very short half-life; they are highly reusable. Words like ‘you’, ‘me’, ‘man’, and ‘dog’, for instance, can be used constantly without fear of disrupting the reader’s mental flow. Only in the cases where these words are repeated back to back – like the construction ‘had had’, which is legitimate but habitually avoided by editors – will they risk perplexing or annoying the reader.

Most other words are somewhat less reusable. ‘Subtle’ needs a few paragraphs, maybe a few pages, before it once again passes below the reusability radar. You can use ‘flummoxed’ or ‘insidious’ more than once, but there has to be a good bit of space between them. ‘Lugubrious’ needs at least a chapter, if not more. The rule seems to be that the more unusual the word, the less you can repeat it. (By the way, there is probably an interesting conclusion to be drawn here involving Zipf’s Law, but I’m not sure what it is.)

One of the best low-reusability examples I know is the word ‘nadir’. Nadir, normally used figuratively to describe the low point of something, e.g. ‘the nadir of my high school years’, is a beautiful expression. (Its better-known opposite is ‘zenith’, the high point of something – both words come from Arabic, from the field of astronomy.) But use ‘nadir’ twice in the same book, as happened in a story I just read, and it hits the reader with all the linguistic subtlety of a truckload of bricks. It seems, well, clumsy. Nadir is just one of those once-a-book words.

The same goes for some idioms and self-crafted expressions. On page 14 of the otherwise excellent book ‘Stiff’, which tells the story of what happens to the human body after we die, the author Mary Roach uses the expression ‘a conversational curveball’ to good effect. But 160 pages later, when she describes something as ‘a philosophical curveball’, it stops me dead in my tracks. Using the curveball metaphor twice, even that far apart, seems like an editorial oversight.

Jasper Fforde, an author who does interesting things with the English language, and who was a partial inspiration for this post, thought up a new device for use in literary settings, which he called an echolocator. An echolocator is a person who scans texts to detect if the same word is used twice within 15 words of each other. I don’t know why he chose the number 15, but in the same vein, it could be interesting (and utterly pointless, but interesting things often are) to create a quantitative index of the reusability of the words in the English language.
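Fforde's echolocator is simple enough to sketch in code. Here is a toy version of my own (an assumption about how such a device might work, not anything Fforde specified): it slides a 15-word window over a text and flags any word that repeats inside it.

```python
import re
from collections import deque

def echolocate(text, window=15):
    """Flag words that reappear within `window` words of a previous use."""
    words = re.findall(r"[a-z']+", text.lower())
    recent = deque(maxlen=window)  # sliding window of the last `window` words
    hits = []
    for i, word in enumerate(words):
        if word in recent:
            hits.append((i, word))  # position and offending word
        recent.append(word)
    return hits

print(echolocate("the subtle point is that a subtle word grates on the reader"))
# → [(6, 'subtle'), (10, 'the')]
```

Note that this naive version flags highly reusable words like 'the' just as eagerly as 'subtle'; a proper reusability index would have to weight each word by its frequency in the language, which is where Zipf's Law would presumably come in.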

Re. the phenomenon described in the ‘Musical redemption’ post below, I thought of another, interesting manifestation of the same thing. It happens when I write notes to myself.

I normally walk around with a couple of blank record cards in my pocket. Whenever some stray thought hits me, I write it down. Sometimes, if I’m really enthusiastic about an idea, I put lots of exclamation marks on it, double underscore, that kind of thing.

Two weeks later, when I pull out the record card again and look it over, I am completely clueless as to what some of my own notes mean. I look at a note and think “Now what the hell was I thinking when I wrote that?” I literally cannot guess or remember what the idea was, based on my disjointed scribblings.

What I think happens here is the exact same thing as with the musical experiment, where the sender ‘fills out’ the communication with details in his own head. Me-in-the-past writes something down that makes perfect sense to myself, based on the tapestry of thoughts that I have in my head when I write it. Two weeks later, when me-in-the-future reads the note, the background thoughts are not there to inform the reading, making it a lot harder to remember what the note was supposed to mean. Me-in-the-past simply fails to see that the sentence “frame publish – slush!” will not necessarily be clear to me-in-the-future. For this reason, I have now started to write (what seems like) overly extensive notes to myself, with some success.

This, by the way, is an instance of something I find fascinating, namely intrapersonal communication – inTRApersonal, as in communicating with yourself. It is an entirely underestimated and underresearched area within the field of communication studies (where I originally come from). I actually wrote a brief 20-page university paper on intrapersonal communication back in 2003, but be warned, it’s in Danish. I might go more into this subject in a later post.

We all know that procrastination is a bad thing.

We really shouldn’t be doing it: putting off problems till tomorrow that we could be dealing with today. Action is good. Being proactive about things is even better. An immense amount of praise flows towards the employees who are proactive, dealing with issues before they become problems.

Now, there’s something slightly disconcerting about all this proactivity. Dealing with problems before they become problems? You have to wonder how many man-hours are being wasted on issues that were never actually going to become a problem in the first place. “Good news: I’ve been proactive about our polar bear problem” – “Err… are we going to have problems with polar bears?” – “Well, not now that I’ve been proactive about it, obviously.”

The fact is, human beings are prone to procrastinate because it surprisingly often works. I literally cannot count the number of problems I have solved by the simple act of ignoring them completely. Some problems solve themselves, given time. Other problems turn out not to be problems after all. And even real and persistent problems will sometimes get fixed by a person with a lower tolerance for impending doom.

Also, procrastination has a wonderful ability to make all of your other, slightly less unpleasant tasks seem positively rewarding in comparison. Would the desks of your employees ever get cleaned if it wasn’t for that nasty report they are trying to avoid getting started on? Mine wouldn’t. In fact, I’m normally quite productive when I am procrastinating, this post being a good example.

So, when to procrastinate, and when to be proactive? A general rule is that you should be proactive about an issue only when the cost of the proactive measures is lower than the cost of dealing with the problem later, multiplied by the probability that the problem will actually occur. Say, if you have a 50 percent chance of having to do a 10,000 dollar repair operation, you should be proactive about it only if the cost of being proactive is lower than 5,000 dollars. That way, you will tend to win out in the long run.

Of course, this rule works only when you can reliably estimate both the costs and the probabilities involved – and when there is a long run, i.e. when the issues on the line are not life-threateningly big for your company. If it is a one-off situation where a bad outcome will destroy your company, it may make sense to err on the side of caution, if only to attain peace of mind.
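The break-even rule above fits in a few lines of code. A minimal sketch (the figures are the invented ones from the repair example, not real data):

```python
def should_be_proactive(proactive_cost, later_cost, probability):
    """Act now only if that is cheaper than the expected cost of waiting."""
    expected_cost_of_waiting = later_cost * probability
    return proactive_cost < expected_cost_of_waiting

# The 10,000 dollar repair with a 50 percent chance of being needed:
print(should_be_proactive(4000, 10_000, 0.5))  # 4,000 < 5,000 → True, act now
print(should_be_proactive(6000, 10_000, 0.5))  # 6,000 > 5,000 → False, procrastinate
```

The comparison is just an expected-value calculation; in the long run, following it means you pay less on average than someone who is proactive about everything.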

I can practically never make my friends recognise the tunes I sing to them.

If I try humming the latest radio hit, I will receive perplexed looks from them, followed by general sniggering and good-natured ridicule. The explanation seems simple: my musical talents are not quite up to scratch. Well, that, or maybe, just maybe, my friends have been ganging up on me for years, conspiring to pull a massive practical joke (“Oh – it was Happy Birthday you tried to hum! Sounded like something from Wagner to me.”). The bastards.

Anyway, all became clear as I received this illuminating article from my good friend and academic brother-in-arms, Jonas Heide Smith, who has a blog of his own detailing the progress of his PhD thesis on cooperation and conflict in computer games (not related to the following).

“the music tapping study conducted by Elizabeth Newton (1990). Participants in her study were asked to tap the rhythm of a well-known song to a listener and then assess the likelihood that the listener would correctly identify the song. The results were striking: Tappers estimated that approximately 50% of listeners would correctly identify the song, compared with an actual accuracy rate of 3%. What accounts for this dramatic overestimation?

The answer becomes immediately apparent when one contrasts the perspectives of tappers and listeners, as Ross and Ward (1996) invited their readers to do when describing Newton’s results. Whereas tappers could inevitably “hear” the tune and even the words to the song (perhaps even a “full orchestration, complete with rich harmonies between string, winds, brass, and human voice”), the listeners were limited to “an aperiodic series of taps” (Ross & Ward, 1996, p. 114). Indeed, it was difficult from the listener’s perspective to even tell “whether the brief, irregular moments of silence between taps should be construed as sustained notes, as musical “rests” between notes, or as mere interruptions as the tapper contemplates the “music” to come next” (p. 114). So rich was the phenomenology of the tappers, however, that it was difficult for them to set it aside when assessing the objective stimuli available to listeners. As a result, tappers assumed that what was obvious to them (the identity of the song) would be obvious to their audience.”


The above citation comes from a recent research article that documents what I call the Angry Email syndrome – basically, that people who read emails surprisingly often misinterpret the emotional tone of the message, and most often in a bad way. Read an abstract of the survey, or download the survey itself (in PDF).

I have an obsession with simple things.

Normally, we take pride in getting the complex stuff right. It is more glamorous, more prestigious; getting simple things right seems so mundane in comparison. The formulation of the grand overarching five-year strategy traditionally occupies the finest minds in the company (or at least those with the highest pay level). The actual day-to-day implementation – making sure that the product will in fact work – well, that is more appropriate for lesser minds to deal with. There is little doubt that in most organisations, doing strategy is somehow ‘finer’ than doing implementation.

There is something fundamentally wrong with our obsession with complexity. I started thinking about this when I was a platoon commander in the army, participating in large-scale field exercises. There, I noticed that in 90 to 95 percent of the cases, when something went wrong, it wasn’t because of the complex stuff. The complex stuff received a lot of attention and careful advance planning, and had a decent success rate, all things considered. It was the simple stuff that went wrong. Somebody would confuse left and right, and botch up a major part of the exercise. Someone else would accidentally push the wrong button on the radio, so that the support divisions didn’t hear the attack order, with predictable results. Or a third somebody would mix up two numbers and end up calling an airstrike on his own headquarters (not a great career move).

To sum it up: most of the time, it is the simple things that go wrong. And this is really stupid, because the simple things are a lot easier to fix than the complicated things. Nobody can fully mastermind a global, multi-stage product launch, but we can make sure that the guy in the marketing division talks regularly with the guy in the sales division. We can’t predict all of the organisational changes that will take place because of our flashy new sales database system, but we can make sure that the user interface can be understood by the people who are to enter the data in the first place.

So, here’s my suggestion: for the next week or two, forget everything about the strategy of your company. Postpone your meetings with all those visionaries that want to tell you about the future. Instead, start focusing on the simple things, on the here and now. And don’t stop until you are sure that they work, and work well. Only then will it make sense to return to the higher spheres of planning, secure in the knowledge that your grand visions won’t founder on the shores of simplicity.

A mental framework that I have found useful is the distinction between proximate and distal causes – or, if you prefer, immediate versus ultimate explanations for things.

The best way to illustrate the difference is to consider the following question: Why do we have sex? A proximate (or near) explanation is simply to say ‘because we enjoy it’. Sex feels good, so we generally try to have it often.

This is true, but it is not complete. To fully answer the question, it is necessary to then ask ‘but why are human beings built so that we find sex enjoyable?’ The answer to this comes from evolution: the tendency to like and want sex is hardwired into our nature, because sex has been good (critical, actually) for the reproductive success of our ancestors. Those of our distant ancestors who didn’t have sex simply didn’t have offspring, and so never passed on their sex-hating genes. As it is, we are all descendants of people who went to great lengths to get sex, and thus managed to populate the world with their children; this is the reason why we like it.

This explanation is a so-called ultimate (or distal) explanation. It is what pops up when you keep asking ‘why’ to the first answer. Another, slightly different example of the same thing is taken from The Economist (I can’t remember which issue): Why does the water in a kettle boil? One cause could be “the water boils because heat is transferred from the hot stove to the kettle”. A completely different explanation is to say “because I wanted a cup of tea”.

The point is that there can be a hierarchy of causes for things, and that those causes are not necessarily mutually exclusive. It all depends on what you are trying to do when you are posing the question.

In the scientific study of happiness, a particularly interesting finding is that people quickly adjust to newly gained wealth – even major increases in income or life quality have only a passing effect on your basic happiness level. Lottery winners are in heaven for a month or two, and then it’s back to feeling averagely happy (or unhappy) again.

This universal phenomenon is called the hedonic treadmill. The hunt for happiness is a futile endeavour, at least if the goal is to become happy. We think that happiness will be ours when we have a private jet plane, but once we get it, the goalposts move once again, and we realise that true happiness comes only when we have two personal jet planes. And so on.

Interestingly, according to Daniel Nettle’s book Happiness: The Science Behind Your Smile, there is a sound evolutionary explanation for this. Our drive for happiness is nature’s way of keeping us striving to improve our lives. If it were easy to become happy, or if the effect were permanent, we would still be sitting in our caves, supremely happy because, hey, we have a cave to sit in. No reason to strive for higher things when you have a nice cave to sit in. No bears in it, either. Being unhappy, however, is a call to action: it makes us try to improve things. The human propensity to continually search for more happiness is nature’s way of keeping us on our toes, ever looking for ways to do better.

On a side note, Denmark – a puny yet curiously wonderful nation of which I am a proud member – has recently been found to be the happiest country in the world. It must be all those girls biking around in summer dresses.

See my review of Nettle’s book on happiness.