Artificial Dumbness

On Friday 7th February, around 30 Year Seven and Eight students took part in our third breakfast talk in the Rumble Museum Future Season series. Dr Robert Esnouf, Head of Research Computing at the Wellcome Centre for Human Genetics and Director of Research Computing for the Big Data Institute, University of Oxford, visited to speak about the future of A.I. in a talk he entitled “Artificial Dumbness”.


Robert began by sharing a bit about his formative years. He attended Gosford Hill School in Kidlington, and while he was there, the school got its very first computer! No one knew how to use it, but he became interested and taught himself. He left school at 17 to work for a company in Birmingham before attending Oxford University to read Chemistry. He then did a doctorate in Biochemistry. Ten years ago, he left Structural Biology to start organising computing in Human Genetics, and three years ago he also took on the role at the new Big Data Institute.


He then spoke about how Ada Lovelace pioneered early ideas about computing, though at that stage a computer was simply a machine that followed fixed instructions to perform tasks. Since then, we have seen the rise of what has become known as ‘Artificial Intelligence’, where computers learn new things for themselves from data.


However, A.I. can be caught out and make lots of mistakes because of the way in which it operates. He showed a diagram of an A.I. neural network, which involved an input, two ‘hidden layers’ of processing, and then an output. He explained that we don’t actually know what is going on in these hidden layers, and that A.I. therefore looks at things in a very different way from a human brain. He gave the example of an A.I. recreating a picture in the style of Rembrandt, which was difficult to spot unless you were an art expert, and another of an A.I. which wrote a Harry Potter book, which was very easy to spot!
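The shape of the network Robert drew can be sketched in a few lines of Python. The layer sizes and random weights below are illustrative assumptions, not details from the talk; the point is that data simply flows through two hidden layers whose internal numbers have no obvious human meaning:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A common "activation" applied inside the hidden layers
    return np.maximum(0, x)

# Illustrative sizes: 4 inputs -> 5 hidden units -> 3 hidden units -> 1 output
W1 = rng.normal(size=(4, 5))
W2 = rng.normal(size=(5, 3))
W3 = rng.normal(size=(3, 1))

def forward(x):
    h1 = relu(x @ W1)   # first hidden layer: numbers we can't easily interpret
    h2 = relu(h1 @ W2)  # second hidden layer: same story
    return h2 @ W3      # output

x = rng.normal(size=(1, 4))  # one example with 4 input features
y = forward(x)
print(y.shape)  # -> (1, 1): a single output value
```

In a real system these weights would be learned from data rather than drawn at random, but inspecting `h1` and `h2` still would not tell a human *why* the network produced its answer, which is exactly the "hidden layers" problem described above.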


He then moved on to more serious examples of A.I. getting things wrong, for example, when trying to identify faces on social media. In one case, a man was not allowed a passport because the A.I. incorrectly decided that his photograph showed him with his eyes closed. He also spoke about deep data bias, whereby assumptions made by programmers about gender and race led to women and some ethnicities being under-represented or misrepresented in various ways. In medicine, an A.I. had been used to identify people needing hip surgery from scans, but it had made its selections based on the age of the patient and the type of scanner used, rather than on the scan image itself.

He concluded that it was important to remember that computers are not to be trusted, and to be mindful of this as you use things such as social media in your daily life.

Thank you to Robert for taking the time to deliver such an interesting and thought-provoking talk, which was greatly enjoyed by the students and staff!
