Date posted: 14/02/2020 · 5 min read

Genevieve Bell: Casting doubt on our AI fears

Anthropologist turned technology storyteller Genevieve Bell is looking at what artificial intelligence can and can’t yet do.

In Brief

  • The idea of artificial intelligence (AI) is more than 60 years old, but it’s a mistake to think AI is the same as a human brain, says the ANU’s Professor Genevieve Bell.
  • AI augments rather than replaces human intelligence, Bell says.
  • Types of AI such as machine learning are backward-looking, not creative.

She’s an anthropologist who specialised in Native American ethno-history and then went to work at Intel, before returning to Australia in 2017 to establish the 3A Institute (the Autonomy, Agency and Assurance Innovation Institute) at the Australian National University. These days, Professor Genevieve Bell enjoys telling stories of technological history – and explaining what they tell us about artificial intelligence (AI).

In 2020, the phrase “artificial intelligence” often smacks more of threat than of progress. Worried commentators speak of mass unemployment, war-fighting robots, mass surveillance and even the perpetuation of discrimination by machines trained to copy existing human behaviour.

Jokesters refer light-heartedly to “our new robot overlords”. In 2014, billionaire innovator Elon Musk labelled AI “our biggest existential threat”. Even if AI does not become sentient and seek to destroy us, we worry that it may make us obsolete.

Enter Genevieve Bell, informed and sceptical. At Swinburne University’s Chancellor’s Lecture in August 2019, she took on artificial intelligence in a lecture called “Wonder in the age of AI”. Her message: The idea of artificial intelligence is more than 60 years old, and people should have hopes for it rather than just fears.

The first burst of enthusiasm for AI

Picture: Genevieve Bell.

Bell starts her story with the very first computers, including Australia’s CSIRAC. From the start, even the smartest of humans made the mistake of considering them akin to giant human brains.

That thinking carried into the famous 1956 Dartmouth Summer Research Project on Artificial Intelligence, generally agreed to be the first event to consider AI. Mathematician John McCarthy seems to have coined the term “artificial intelligence” specifically for the eight-week gathering.

The select list of Dartmouth attendees included electrical engineer Claude Shannon (later dubbed the father of information theory), mathematician Marvin Minsky (who pioneered neural networks) and economist and psychologist Herb Simon (a leader in decision-making theory).

But “artificial intelligence” turned out to be more than a slight overstatement of these early computers’ abilities. Even as they solved problems, the machines taught the early computer scientists how complex human intelligence really is. Computers are not human brains, but something else.

The Dartmouth pioneers and their colleagues, Bell notes, quickly began trying to teach a computer to translate Russian, an important task at the height of the Cold War. They hoped, she says, for “instantaneous translation”. Instead, “they came to understand that language wasn't as simple as words... that words and meaning had a complicated relationship.”

“They tell the story of trying to teach the machine the phrase, ‘the spirit is willing and the flesh is weak’. And the translation came back: ‘the meat is bad, and the vodka is strong’.”

That first burst of AI enthusiasm petered out in the late 1960s and early 1970s, in part because a 1966 US government report concluded that people translated languages more cheaply, quickly and accurately than machines. AI has experienced more of these bursts of enthusiasm over the years, Bell says, followed each time by an “AI winter” as interest wanes and money dries up.

How AI is working today

Most of the AI talked about today is a set of techniques known as “machine learning” – essentially, computer programs that apply statistical formulas to a pile of data to find relationships within it.

For instance, we can feed a computer a stack of images labelled as cats and dogs and tell it to analyse them – and it can, in a limited way. It may then be able to sort a bunch of new photos into the same two categories, cats and dogs. And when we tell it where it has gone wrong, it can “learn” from its mistakes, and perform better on the next batch of photos.
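In machine-learning terms, what Bell describes here is supervised classification: label the examples, let the program find patterns, then correct it and retrain. A minimal sketch of that loop in Python, with made-up numeric features standing in for real photos, and scikit-learn as just one of many libraries that could do the job:

```python
# Toy supervised classification: each "photo" is reduced to two made-up
# features (say, ear pointiness and snout length) rather than actual pixels.
from sklearn.linear_model import LogisticRegression

# Labelled training data: a pile of images tagged as cats or dogs.
features = [
    [0.9, 0.2],   # pointy ears, short snout
    [0.8, 0.3],
    [0.2, 0.9],   # floppy ears, long snout
    [0.3, 0.8],
]
labels = ["cat", "cat", "dog", "dog"]

model = LogisticRegression()
model.fit(features, labels)                 # "analyse" the labelled pile

# Sort a new batch of photos into the same two categories.
new_photos = [[0.85, 0.25], [0.25, 0.85]]
print(model.predict(new_photos))            # ['cat' 'dog']

# "Learning from its mistakes" is just more of the same: the corrected
# example joins the training data and the model is fitted again.
features.append([0.5, 0.55])
labels.append("dog")                        # we tell it where it went wrong
model.fit(features, labels)
```

Note that the “learning” here is narrow in exactly the way Bell says: the program finds statistical relationships between the features and labels it is given, and nothing more.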

With ample computing power, such techniques really have yielded results over the past decade. Machines can now not just separate pictures of cats and dogs, but accurately divine which of your children are in a particular family photograph. The machine-translation dreams of the late 1950s have become reality in programs such as Google Translate.

But the machine still does not understand what it is translating. AI still has very obvious limits.

The limits of recommendation engines

Those limits show through, Bell suggests, in the AI built into the “recommendation engines” of content services such as Amazon, Netflix, Google, and even your bank. Such services attempt to predict what you will like based on your history – what you have liked in the past.

Behind their recommendations, Bell observes, is “the notion that the familiar is good – so if you like this, let's find things that are like it, and things that are a little bit further away from but still kind of like it.”
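Reduced to code, that notion of “the familiar is good” is usually a similarity score between what you liked and everything else in the catalogue. A minimal, hypothetical sketch – the titles and taste features below are invented, not any real service’s data:

```python
import math

# Hypothetical catalogue: each title is described by made-up taste
# features (e.g. weights for crime, romance, nature content).
catalogue = {
    "crime drama A":   [0.9, 0.1, 0.0],
    "crime drama B":   [0.8, 0.2, 0.1],
    "romantic comedy": [0.1, 0.9, 0.2],
    "nature doco":     [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Similarity of two taste profiles: 1.0 means identical."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def recommend(liked_title, top_n=2):
    """'If you like this, find things that are like it.'"""
    liked = catalogue[liked_title]
    scores = {title: cosine(liked, vec)
              for title, vec in catalogue.items() if title != liked_title}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("crime drama A"))   # ['crime drama B', 'romantic comedy']
```

Because every score is computed from what you liked in the past, the top of the list is by construction the most familiar thing available, which is the backward-looking conservatism Bell goes on to describe.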

That attempt at understanding falls short. Traditionally, content industries have had professionals who could see how tastes were evolving, Bell says.

AI’s tastes are inherently more conservative, because it can only look backward. Humans go through a process she describes as “familiar, familiar, familiar, starting to get a little bit bored and irritated, still familiar, desperately hoping for something else and you're still giving me familiar.” AI cannot respond to that.

Bell’s hope is that we might someday have AI that creates something new – “that created a possibility of wonder, not simply reproducing the taste of other people that came before them”.

The AI seduction

We’re now again in what some analysts call an “AI spring”, with enthusiasm high and money flooding into the field.

With AI again on the rise, Bell suggests, it is easy to be seduced by the artificial intelligence agenda – “the idea that machines will understand language, that machines will understand abstract concepts, do things that we only once knew how to do, and maybe learn for themselves.” After all, some of that is already happening.

But the dream of machines that understand what we say, that can deal in abstract concepts, remains just a dream. Bell warns it is tempting to jump from the possibility of machines that learn to “imagining that technology and humans have an oppositional relationship.” She argues that it is possible to imagine instead that the relationship can be “both an augmenting relationship and a collaborative one”.

Read more:

4 things you didn’t know about algorithmic bias

Artificial intelligence can sort and analyse data at high speed, but here’s why it’s important to ask: ‘Is my AI racist?’

AI at an ethical crossroads

In a world where robots will one day write code for themselves, the rise of artificial intelligence and machine learning creates new ethical dilemmas, according to a paper by CA ANZ.
