Ep 146: How AI Impacts Our Teens
Andy:
You've got a book called A Citizen's Guide to Artificial Intelligence. Talk to me about this topic, how this became an interest for you and why you think that a book is needed about this. Why is it important for people to know about, and why do people not know enough about it?
John:
I got into it quite by accident, actually. I originally trained as a lawyer, but not being terribly satisfied with that calling, I shifted and fell into philosophy, cognitive science, and linguistics. And I did a PhD in that.
John:
Then when I went on the academic job market, there was this interesting job ad in New Zealand, of all places. I was living in Australia at the time. And the job ad specified that the ideal candidate would have a background in either machine learning, cognitive science, or computer science, and law or politics.
John:
So this made for a really interesting Venn diagram. And I don't think that many people had both of these backgrounds intersecting, but I did. Initially I was skeptical. I thought, "Well, I've just done all this work on cognitive science and the brain and language. Looking at technology, that seems like I'm changing careers yet again."
John:
But it turned out to be a really good decision, and that was back in 2017. Since then, this topic, this area of AI, and AI and society, and AI and politics, and AI, AI, everywhere, has just completely expanded beyond anything that I could really imagine.
John:
It is just so of the moment. And the pace, if anything, has simply accelerated, both in terms of research discoveries, the papers that are coming out, and in terms of new applications appearing all the time, new breakthroughs being made by the most advanced machine learning systems. There is just this demand for this area.
John:
And so that's why I decided to progress with it. Then after my little stint in New Zealand, I went to Cambridge, and that's where I am now. And within two weeks, I'm starting my new position at Oxford pursuing the same topic. So that's how I fell into it.
John:
The reason why it is important for people to know about it is because I think, along with global warming and climate change, one of the forces that is most going to shape our lives in this coming century is the advent of increasingly sophisticated machine learning technologies.
John:
Those of your audience who are older than, let's say, 30 will be able to attest to the transformation that the world has gone through in the past two and a half decades. That's the generation who was there just before dial-up internet came.
John:
And now you compare that world to the world we inhabit today, and it's almost changed beyond recognition. Just the way that we interact with our environment, the way that we use the objects around us to get through our lives.
John:
Once upon a time, you might've written a shopping list. Once upon a time, if you arranged to meet someone in town, you would have had to be very specific about where and when you would meet them. There was no chance that you could just text someone and say, "Hey, here I am." So the world has just changed, and that pace of change is only going to increase.
John:
This stuff is here to stay. So that's why people need to know about it, because it turns out it has lots of tentacles, and it affects lots of things: politics, our day-to-day social lives, our work, our professional lives. Lots of tentacles, and lots of repercussions.
John:
So citizens just need to get on top of the main issues, in the same way that your average citizen knows stuff about global warming. They might not know much about meteorology and geography and oceanography. They might not be able to tell you what the latest climate models are.
John:
But everybody's got a basic level of understanding about global warming, and that allows them to participate meaningfully in the democratic process. So the goal of this book is to try to get that level of education up a few notches for everyone.
Andy:
Where are we at right now in terms of AI? Because doesn't it seem like, when you try to talk to Siri, she just doesn't really know what you're talking about. And she's like, "Hey, I searched Google for you. And here's what I found." And it's like, "No, I wanted you to order me a cappuccino from Starbucks. Never mind. I'll just call them myself."
Andy:
It seems like AI is maybe... I guess it's pretty good at recommending you a new song on Pandora based on what you've listened to in the past, but not so good at really understanding your intent or having a full conversation with you. So where are we exactly, and where are we going?
John:
That's an excellent question. Your audience, no doubt, will be bombarded by lots of messages coming from the press, the media, TV, CNN, PBS News, along the lines of, "There's something to worry about here. AI is going to take over. It's going to take over our jobs. It's going to perform all the surgeries for us in the future. We won't need hairdressers anymore."
Andy:
All the self-driving cars.
John:
Self-driving cars, yeah.
Andy:
Flying cars that drive themselves...
John:
Yeah, yeah. So on the one hand, there is this onslaught of information that tends to be apocalyptic, and tends to, well, frankly, exaggerate the current potential of AI technology.
John:
But then there's this other lived experience that we all have of AI, such as it exists at the moment in the form of our smartphones and smart devices and so forth, which is anything but smart. It's insanely-
Andy:
It feels kind of rudimentary.
John:
--and infuriatingly stupid. Both of these things are going on.
John:
Traditionally, AI comes in two flavors, right? I don't know if you might've heard of this, but there's what's called weak AI, and there's strong AI. Now weak AI is the AI that we've got now. This is AI that can basically do one thing, and do it pretty well. Okay? But it can't do anything else.
John:
So yeah, you can have a system that is uncannily good at predicting what you would like to purchase on Amazon in light of what you have purchased before. You've got satellite navigation software, and all sorts of systems used by governments and bureaucracies to help with the day-to-day administration of large states, local governments, and councils.
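[Editor's note: below is a minimal sketch of the kind of "weak AI" recommender John describes, suggesting items frequently bought together with a shopper's past purchases. The item names, baskets, and function names are invented for illustration, not any real system's API. The point is that such a system does this one thing reasonably well and nothing else.]

    # Naive co-purchase recommender: all data below is hypothetical.
    from collections import Counter
    from itertools import combinations

    past_baskets = [
        {"tent", "sleeping bag", "headlamp"},
        {"tent", "sleeping bag"},
        {"headlamp", "batteries"},
        {"sleeping bag", "headlamp", "batteries"},
    ]

    # Count how often each ordered pair of items shares a basket.
    co_occurs = Counter()
    for basket in past_baskets:
        for a, b in combinations(sorted(basket), 2):
            co_occurs[(a, b)] += 1
            co_occurs[(b, a)] += 1

    def recommend(purchased, k=2):
        """Rank items the shopper doesn't own by co-purchase frequency."""
        scores = Counter()
        for item in purchased:
            for (a, b), n in co_occurs.items():
                if a == item and b not in purchased:
                    scores[b] += n
        return [item for item, _ in scores.most_common(k)]

    print(recommend({"tent"}))  # -> ['sleeping bag', 'headlamp']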
John:
So there's that kind of AI. And this is all, as I said, weak AI. It does one thing, and it does it pretty well, but it can't do anything else. The Holy Grail-
Andy:
Or like, when you try to go off script, it gets really confused. Sometimes you call into customer service, and it's an automated system. And as long as you're still, "Yeah, I want to make a reservation. Yup. 3:00 PM. Thursday. Yup. That sounds good. Okay. Great." Everything is fine, and it totally understands what you're saying.
Andy:
But as soon as you're talking about like, "Okay, now, do you guys do the dyes and the cuts there? And would you also do a shave, or do I need..." It gets confused. "Wait now, so are you guys located on this block? Or would I go past the Jamba Juice in order to get there? And then which way do I turn?" It's confused already. It doesn't know what's going on.
Andy:
So it's like, yeah, it can handle things in the narrow sort of avenue it has been trained for, and not really when you try to throw it a curveball.
John:
That's it, that's it. The Holy Grail of AI research is not weak AI. It's what has traditionally been called strong AI. And this is the kind of AI that does all of those other things we were talking about, but everything else too.
John:
Basically, AI that does what a human being can do. A human being can play tennis, then play chess, then go and do an arithmetic calculation, then go and engage in some other fun social activity, interact with someone else, and hold a conversation by saying things that are appropriate to the context. We take it for granted.
John:
But things like, if I told you, "Could you go to the shop and pick up some milk?" You would know exactly what I meant by that. But it turns out to be extremely difficult to program a computer to understand that in a way that makes sense, the way we think it makes sense.
John:
Because with a computer... I mean, the most direct logical reading would have the computer, an embodied machine, a robot, go to the shop, pick the milk up, and the job's done, because that's the shortest line between those two points. That's the most direct logical language to express the idea in. But it misses everything we leave implicit, like paying for the milk and bringing it home.
John:
That's the Holy Grail of AI research, to get systems that can do things that are adaptively fluid and intelligent and flexible the way human intelligence is. And we're just nowhere near that.
John:
So to answer the question finally, where are we at with AI? We're basically at the frontier of weak AI. We're pushing the boundaries of weak AI, but we're not really much closer to reaching the other objective.
Andy:
And so, for all the futurists and the Ray Kurzweils out there, how far away are we, maybe, from getting to strong AI? Are we talking 20 years? Are we talking 10 years? Are we talking 50 years?
John:
I'm not in a position to say, but I would guess that it won't become a reality any sooner than 100 years from now. Yeah, but maybe that's really pessimistic. I know others think that by 2050, or even 2040, we might see something like a conscious AI. But I'm skeptical.
Andy:
Okay. Well regardless, it's definitely a big part of our lives. It's changing the way that we interact with technology and with each other, and it's not going anywhere.
Andy:
So it makes me wonder, how do we push our teenagers to develop skills that are going to be relevant, and interests that are going to last? That are going to still matter as AI kind of takes over and approaches strong AI?
John:
Yeah, that's a good question. Let me bracket the issue of strong AI, because there are enough interesting issues for parents that arise with regard to weak AI as it is. So let's just bracket the strong AI for the moment.
John:
If I were a parent, and I'm not, but if I were a parent, there are a couple of, I suppose, basic points that I would have in mind about trying to steer, or at least guide, my child onto a path that will lead them to something good and wholesome for their future. A career that they can get something out of, and that will last as long as they're happy to stay in it.
John:
Some really basic things. Your child should end up doing, ideally, what they're good at doing, and what they enjoy doing. And generally, if you're good at doing something, you tend to enjoy it. So that's a cardinal principle that will stand the test of time.
John:
Now with that in the background, we've got this reality around what we might call process-driven tasks, process-driven jobs. Any task that can be broken down into smaller parts, and that doesn't require too much in the way of individual human discretion or judgment, right? So it's more formulaic. Any task of that character will be increasingly automated as we go forward.
John:
With that in the background, what does that leave? Well, I think it means that the roles that have traditionally been performed by women, the caring professions, teaching, nursing, counseling, those sorts of roles, are not really in danger of being automated.
Andy:
Interesting.
John:
The jobs that are in danger are the ones that have traditionally been performed by men. Banking, finance, lots of parts of engineering, even lots of aspects of law as well. A lot of that can be automated and made to fit into a kind of mold where it's routine-based. It's iterative, it's recursive.
John:
So I think that's an interesting development that we're going to see happen over the next, say, 20 or 30 years. Whether this means that those professions women have traditionally performed, the caring professions, will be remunerated better is another question.
John:
There's good enough reason, I think, to believe that they won't be any better remunerated than they are now. Because generally, the way our economies tend to work is that the rarer a job is, and the harder it is to perform, the more you get paid for it. But caring roles are generally not rare. I mean, everyone has the capacity to be empathic, to feel compassion, because that's just who we are as humans.
John:
So there's an issue there. I'm not sure that we will see a recalibration of wages in the labor market. But what I think will probably happen is that more men will be flocking to those traditionally female jobs, just because so much of the traditionally male work will now be performed by machines.
John:
To bring it back to what you do with your children: you're always navigating by the star of your child's own innate interests, and what they're good at. I would try to steer them toward something that I think has a future. And the less process-driven, formulaic, and algorithmic a job is, the more chance it will survive into the future.
Andy:
I hear people talking like, "Aren't the jobs of the future going to be that we need people to manage the AI? We're all going to be sitting around and programming the AI, and going through and watching what it's doing. The factory people of the future aren't actually doing the work themselves, they're using the software and managing all of that."
Andy:
So do we want to master AI ourselves, or push our kids into fields related to AI, or toward an education that will make them proficient in it? Or is that a losing battle?
John:
It is a very good question. We can all agree that reading, writing, and arithmetic are just the basic solid foundation of any education. We don't even question it. That's just kindergarten, first grade, second grade, all the way through. That's just the foundation of an education.
John:
I think that we should have coding added to that. So what we should be teaching our kids is reading, writing, arithmetic, and coding. But then we don't aspire in life to become professional readers, or people that perform arithmetic as a profession.
Andy:
Arithmetic-ers.
John:
Yeah, yeah, right! These are just taken to be the basic building blocks of living in a complex civilized society. And coding should be in the same category.
John:
On the one hand, what I'm saying is that we all need to be better at coding, and we need to have that knowledge imparted to our children. On the other hand I'm saying, don't think of a career in terms of being able to code. Think of every career as somehow involving an element of coding, or in which being able to code becomes handy at times.
Andy:
Beneficial. Yup.
John:
And that then means that we're asking a different question. We're not really interested in having a society where everybody does computer science, and where the universities are just spewing out computer science graduates.
John:
We're thinking more in terms of, what are the jobs that will survive? To what extent will they involve formula and process-driven work? Let's try to encourage our kids to develop their talents in that direction, and then we'll have coding as part of the curriculum, come what may.
John:
The reason I'm skeptical about encouraging more students than necessary to go into IT work, just because we're on the cusp of this machine learning revolution, is this: think about what happens once you have an entire production facility, say, computerized. Honestly, compare that with the same production facility staffed by humans.
John:
Now you said that we'll need people to maintain the computers. Sure. But how many people do we really need to maintain a production facility? I'm willing to bet that if you had a factory with 50 people doing the production, and then one where the computers did everything, you might only need two, three, or four technicians. The ratio of maintenance guys and gals to machines is a lot lower.
Andy:
What about this idea of bias? Can AI be biased? And I guess specifically with regard to teenagers, how would that matter?
John:
If you look at who is doing IT and artificial intelligence and machine learning, if you look at the industry and its composition, it's overwhelmingly white, straight men. Overwhelmingly. There's a whole series of demographic and sociological reasons why that's the case, but that's the way it is.
John:
It's unsurprising to learn that these technologies that are being developed by straight, white men, are going to be created in the mold of straight, white men. They're going to be tested on straight, white men. They're going to reflect the assumptions that straight, white men have about the world.
John:
I mean, obviously there's no one abstract thing called a straight, white man. We're all very different. But if you include other ethnicities, other cultures, other sexualities, other genders, if you include as many types of human as possible in the mix, you'll get a very different kind of technology. One that reflects the assumptions of all of these different people.
John:
And it's become sort of a notorious fact about AI tech, you name it, whatever technology we're talking about, that it tends to perform very well on a straight, white male, or at least a white male. And then as you deviate from that type, performance degrades.
John:
So by the time you get to an Asian woman, or a transgender individual, it starts really degrading. It doesn't know what it's dealing with. An example would be a classifier that's meant to recognize your face, or is meant to label someone for some characteristic. Let's just say that it's meant to recognize whether you're old or young, or whether you're a woman or a man, whatever the criterion might be.
John:
Well, if you give that system a white man, it will probably get the right answer, whatever classification it's doing. And as you deviate from that, you tend to get increasingly wrong answers.
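[Editor's note: below is a minimal sketch of how the degradation John describes can be measured, computing a classifier's accuracy separately for each demographic group. The predictions, labels, and group tags are invented toy data, not results from any real system; they are only shaped to show the pattern.]

    # Per-group accuracy audit: all data below is hypothetical.
    from collections import defaultdict

    def accuracy_by_group(predictions, labels, groups):
        """Return {group: fraction of correct predictions}."""
        correct = defaultdict(int)
        total = defaultdict(int)
        for pred, label, group in zip(predictions, labels, groups):
            total[group] += 1
            if pred == label:
                correct[group] += 1
        return {g: correct[g] / total[g] for g in total}

    # Toy numbers shaped like the pattern described above: accuracy
    # falls as subjects deviate from the training majority.
    predictions = ["man", "man", "man", "woman", "man", "woman", "man", "man"]
    labels      = ["man", "man", "man", "woman", "woman", "woman", "woman", "woman"]
    groups      = ["white male", "white male", "white male",
                   "white female", "white female",
                   "asian female", "asian female", "asian female"]

    print(accuracy_by_group(predictions, labels, groups))
    # -> {'white male': 1.0, 'white female': 0.5, 'asian female': 0.33...}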
John:
There's a great documentary on Netflix at the moment called Coded Bias. And it's a fantastic summary of all the issues with AI about bias. Really, really good.