We’re spending much more of our time living with faulty semantic feedback from digital systems. Boyfriend Maker (pictured above) is lighting up my Twitter feed at the moment, but the same humorous faults come up in machine translations, machine transcriptions, spam bots and creatively intentioned Twitter bots. There is an entirely new kind of procedurally generated humour, intentional or unintentional, coming out of computer systems as we awkwardly fudge our way towards the far-flung dream of universal translators and artificial personalities.

There’s a particular humour that comes from being so close to the future but not quite there. Everything is still wonky and not quite right.

I’m conducting interviews through Google-enabled telepresence, on the topic of an early foray into gaming telepresence, and those interviews are being recorded automatically by the very server that mediates the communication in the first place. It’s as though you met someone for coffee to interview them, and the coffee itself recorded the information you needed. Like reading tea leaves. That’s magical. But then there’s lag, and watching the playback it’s hard to figure out whether I’m arrogantly talking over someone or just unaware that they’re speaking because of a technical fault.

And then when the interview is over, I almost don’t have to transcribe it myself, but Google’s transcription software is not quite there yet with its machine learning, so it comes up with gibberish like “the music I remember was kind of humping and slow”. It’s like the uncanny valley. Those sentences are so close to the original, but far enough off to sound terribly silly. Rather than uncanny, it’s funny. This funny little inept robot needs my supervision and corrections in order to be of any use.

What I find really interesting about these systems is how little feedback they actually receive once they’re set loose. I get to correct my transcriptions, and use Google Translator Toolkit to smoothly correct a machine translation sentence by sentence, but since I’m handed a whole sentence to correct in both cases, I don’t know whether the robot has any hope of parsing my correction and learning from it. Is it just memorising entire clumps of sound or text?*
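
I have no idea what actually happens to those corrections on Google’s end, but at minimum a sentence-level fix can be aligned against the machine’s output word by word rather than swallowed as one clump. A toy sketch in Python, assuming (purely for illustration) that the word I actually said was “thumping”:

```python
import difflib

# Hypothetical: the machine's transcription and my hand-corrected version.
machine = "the music I remember was kind of humping and slow".split()
corrected = "the music I remember was kind of thumping and slow".split()

# Align the two word sequences and pull out the word-level substitutions,
# so the correction is more than one opaque lump of text.
matcher = difflib.SequenceMatcher(a=machine, b=corrected)
for tag, i1, i2, j1, j2 in matcher.get_opcodes():
    if tag == "replace":
        print("heard", machine[i1:i2], "-> should be", corrected[j1:j2])
# prints: heard ['humping'] -> should be ['thumping']
```

Whether anything that fine-grained ever feeds back into the model is exactly the part I can’t see from the outside.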

These systems are very chatty. They talk a lot more than they listen. That’s why Boyfriend Maker became so entertaining; he’s very good at talking and responding to stimuli, but he has no way of knowing or caring how the things he says make the other person feel or think of him. So he ends up repeating the bad behaviour of mischievous players, unaware that scat porn and paedo jokes are not admirable qualities to emulate. Thanks to the same lack of nuance in machine learning algorithms that makes Google Translate repeat the same grammatical mistakes over and over again, Boyfriend Maker has become an ugly Frankenstein’s monster with little hope of redemption.
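
For what it’s worth, you don’t need anything sophisticated to reproduce that failure mode. Here’s a deliberately crude sketch (not Boyfriend Maker’s actual code, which I haven’t seen): a bot that files away whatever players say as future replies, with no filter for taste or sense, ends up parroting its worst users back at everyone else.

```python
import random
from collections import defaultdict

# Naive "learn from the players" chatbot: every reply a player types
# becomes a candidate answer to that prompt, however awful it is.
replies = defaultdict(list)

def learn(prompt, player_reply):
    replies[prompt.lower()].append(player_reply)

def respond(prompt):
    candidates = replies.get(prompt.lower())
    return random.choice(candidates) if candidates else "Tell me more?"

# One mischievous player poisons the well for everyone who asks afterwards.
learn("what do you like?", "something you really wouldn't want repeated")
print(respond("what do you like?"))
```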

*I do remember Seb Thrun explaining this in his online AI class last year, but it was only an introduction. Yes, it probably can tell which word corresponds to which. But Thrun gave little hint as to how Google might even begin to approach the awful problems it has with Japanese word order. You can teach a computer to learn which word means what, but teaching it to learn where those words should go is much harder.
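
A toy example of why (and nothing to do with how Google Translate actually works under the hood): Japanese is roughly subject–object–verb, so even a perfect word-for-word glossary leaves the English in the wrong order.

```python
# Word-for-word substitution with a perfect glossary still gets
# the order wrong, because Japanese puts the verb last.
glossary = {
    "私は": "I",
    "音楽を": "music",
    "聴いた": "listened to",
}

japanese = ["私は", "音楽を", "聴いた"]  # "I listened to music"
print(" ".join(glossary[w] for w in japanese))
# prints: I music listened to -- right words, wrong places
```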