Ep 146: How AI Impacts Our Teens

Episode Summary

John Zerilli, PhD, author of A Citizen’s Guide to Artificial Intelligence, clues us in on how AI is affecting us right now and what it means for our teens and families. Plus, John’s prediction for when AI could take over–and what skills teens should hone in preparation.

Full Show Notes

Our kids are growing up in a world where technology is expanding at a mind-blowing pace! Every year they find themselves with shiny new social media apps, ten new video games that they HAVE to play, and fancy devices that are so much cooler than what came out last year. As a parent, you may feel unsure about the best way to raise your teens in this tech-filled world. How can you get them to put down their phone and focus on college apps? Or even just go outside and get a little exercise?

While all this tech can be a distraction, it can also be pretty dangerous. There are some genuinely frightening corners of the online landscape! Kids might accidentally find themselves entrenched in a hate group or engaged with dark, fringe content. Not to mention that as coders and computer experts get better and better at programming artificial intelligence, teens might find their future jobs at risk, or even experience prejudice at the hands of robotic resume readers!

How is that all possible, you ask? John Zerilli, AI expert and this week’s guest, is here to tell us. He’s a research fellow at the University of Cambridge, and the author of A Citizen’s Guide to Artificial Intelligence. John predicts that in the coming years, AI is poised to infiltrate every area of our lives. He believes everyone has a right to be educated about it! He’s here today to chat about how we can guide our teens through the coming technological revolution and ensure that they have bright and prosperous futures.

In today’s interview, we’re discussing how we can make cyberspace a safer place for kids. We’re also talking about how the job market is changing as AI grows in relevance, and John explains how racial and gender biases can be perpetuated by computer programs. So stick around, because you’re not going to want to miss out on all this fascinating tech talk!

Setting Rules for Safe Browsing

For young people with curious minds, a simple visit to YouTube or Facebook can sometimes end in a bad place. Although they might not seek out damaging material, the algorithms on these websites can often act as a rabbit hole, John explains. Teens can find themselves pulled deeper and deeper into something dark just because it piques their interest or fascination. As they click, they get further from where they started and more engrossed in QAnon conspiracies, pornography, or even racially offensive content.

Luckily, there are ways we can combat this. John and I emphasize the value of setting rules and guidelines for kids’ internet use so they don’t find themselves spiraling into harmful stuff. In the episode, we dive deeper into how we can help teens create these boundaries for safer internet use. We also talk about how important it can be to have conversations with kids about thinking critically when they consume content. John explains how we can guide them to sift through the material and separate truth from fiction.

When encouraging teens to think about the way they interact online, John also recommends talking to them about the “echo chamber.” This is a common trap social media users fall into, where they only interact with content that reinforces their own biases and viewpoints. You may have seen how this phenomenon affects adults, especially when it comes to politics! Teens can be just as vulnerable to this effect, if not more so, so John says it’s important to chat with them about being open-minded before they find themselves unable to consider any viewpoints besides their own.
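
For the technically curious, here’s a minimal Python sketch of the feedback loop behind echo chambers and rabbit holes. It isn’t any real platform’s algorithm; the topics, weights, and update rule are all invented for illustration. The point is just that a recommender which shows more of whatever gets clicked will, all on its own, narrow what a user sees:

    import random

    random.seed(42)

    # A hypothetical user who starts out equally curious about every topic.
    interest = {"sports": 1.0, "music": 1.0, "politics": 1.0,
                "gaming": 1.0, "news": 1.0}

    def recommend(interest):
        # Pick a topic with probability proportional to its current weight.
        topics = list(interest)
        weights = [interest[t] for t in topics]
        return random.choices(topics, weights=weights, k=1)[0]

    for _ in range(200):
        topic = recommend(interest)
        # Each view nudges the weight up, so the topic gets shown more,
        # which earns more views, and so on.
        interest[topic] += 0.5

    # After a few hundred rounds, one or two topics dominate the "feed."
    for topic in sorted(interest, key=interest.get, reverse=True):
        print(f"{topic}: {interest[topic]:.1f}")

Run it a few times with different seeds and a different topic wins each time; the narrowing itself is the constant.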

Another place where the expansion of tech raises questions and concerns from worried parents is the future job market. Are there going to be fewer opportunities as things become more automated? Will there be more careers in tech spaces as computers become more powerful? What can we do to ensure our kids will thrive in a future driven by robotics?

Coming of Age in the Digital Age

Although many people are worried that automation will wreak havoc on the job market, John says that there’s no cause for concern just yet.  We’re still far from a future of robot butlers and flying cars. 

John explains that there are two kinds of AI: weak and strong. Weak AI is what we use in our daily lives, in programs like Siri and Alexa or the Amazon algorithm that tells us which sweatpants to buy. Strong AI is much more complex and sophisticated. For an automated program to fall into this category, it would have to be able to think like a human, moving from task to task with ease and understanding the complicated implications behind a simple command, says John.

For example, if you told a robot to “go to the store and pick up milk,” it would likely stroll down to the store, find a carton of milk, physically pick it up… and that’s all! For the program to understand that it needs to actually purchase the milk and bring it home, it would need a higher level of intelligence than it is currently possible to program. This kind of machine thinking is what John describes as the “holy grail” of AI, and he predicts it won’t be reached for at least a hundred years.

But still, it’s easy to worry that teens are entering a less-than-lucrative job market as things become more automated. So what kinds of jobs should they be pursuing? In the episode, John and I delve into which jobs are at risk and which ones are safe. We also discuss how we can revisit our education system to ensure that kids are prepared for the obstacles they’ll face as they enter this new digital reality.

Interestingly, there are other aspects of AI that might make your kid’s job search difficult. Although it may seem counterintuitive, AI has been shown to exhibit racial and gender biases. You want your kid to have just as many opportunities as anyone else… so how can we combat this confusing conundrum?

Programs and Prejudice

How could a robot possibly perpetuate discrimination? Aren’t they supposed to be purely logical? I was fascinated to hear John explain in our interview that because an overwhelming majority of computer programmers are white men, the programs they build have been shown to work much better for white men than for people of other identities. A classic example is facial recognition software! Programs intended to classify an individual’s face are often far more effective at identifying white men than people of other ethnicities.

Although it seems like computers would be free of opinion, they tend to pass along the biases of those who program them. As John says, “rubbish in, rubbish out.” This same problem occurs when computers sift through stacks of resumes. When researchers have tested how effective computers are at choosing candidates, they’ve found that some programs simply throw out any name that sounds feminine, severely limiting the chances of female applicants!
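
To make “rubbish in, rubbish out” concrete, here’s a toy Python sketch, not any real screening product. The names, the “historical” decisions, and the sounds_feminine() proxy are all made up for the example. A model fit to prejudiced decisions faithfully reproduces the prejudice:

    from collections import Counter

    # Made-up historical hiring decisions: equally qualified candidates,
    # but past reviewers passed over feminine-sounding names (rubbish in).
    history = [
        ("james", "hired"), ("robert", "hired"), ("michael", "hired"),
        ("mary", "rejected"), ("linda", "rejected"), ("susan", "rejected"),
    ]

    def sounds_feminine(name):
        # Crude stand-in for the proxy features a real model might latch onto.
        return name.endswith("a") or name in {"mary", "susan"}

    # "Train" by tallying outcomes for each group.
    tally = Counter()
    for name, outcome in history:
        tally[(sounds_feminine(name), outcome)] += 1

    def screen(name):
        # Predict whatever outcome the historical data makes most likely.
        group = sounds_feminine(name)
        return "advance" if tally[(group, "hired")] > tally[(group, "rejected")] else "reject"

    print(screen("david"))   # advance
    print(screen("amanda"))  # reject (rubbish out)

Notice that nothing in the code mentions qualifications at all; the bias rides in entirely on the training data.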

John explains that this is likely because, historically, women have tended to leave their workplaces earlier in their careers, often due to pregnancy. Of course, this isn’t a valid reason not to hire women, and in fact it could definitely be considered a sexist practice! In the episode, John and I speak further on this concept and talk about how we can keep this kind of discrimination from being something your teen has to worry about.

All in all, technology brings a lot of risks, but with John’s advice, we can learn to mitigate them.

In the Episode…

John’s brilliant mind shines through in our insightful interview! On top of the ideas discussed above, we talk about:

  • Why careers like banking and engineering might be in trouble
  • How AI can help us make the internet safer
  • Why all kids should learn to code
  • Whether predictions about the future of AI are exaggerated

Although AI is complex, John gives you some digestible yet super valuable basic knowledge in this week’s episode! Happy listening, and we’ll see you next week!

Complete Interview Transcript

Andy: You’ve got a book called A Citizen’s Guide to Artificial Intelligence. Talk to me about this topic, how this became an interest for you and why you think that a book is needed about this. Why is it important for people to know about, and why do people not know enough about it?

John: I got into it quite by accident, actually. I originally trained as a lawyer, but not being terribly satisfied with that calling I shifted and fell into philosophy, cognitive science, and linguistics. And I did a PhD in that.

John: Then when I went on the academic job market, there was this interesting job ad in New Zealand, of all places. I was living in Australia at the time. And the job ad specified that the ideal candidate would have a background in either machine learning, cognitive science, or computer science, and law or politics.

John: So this made for a really interesting Venn diagram. And I don’t think that many people had both of these backgrounds intersecting, but I did. Initially I was skeptical. I thought, “Well, I’ve just done all this work on cognitive science and the brain and language. Looking at technology, that seems like I’m changing careers yet again.”

John: But it turned out to be a really good decision, and that was back in 2017. Since then, this topic, this area of AI, and AI and society, and AI and politics, and AI, AI, everywhere, has just completely expanded beyond anything that I could really imagine.

John: It is just so of the moment. And the pace, if anything, has simply accelerated. Both from the point of view of research discoveries, the papers that are coming out, new applications coming out all the time, new breakthroughs being made by the most advanced machine learning systems. There is just this demand for this area.

John: And so that’s why I decided to progress with it. Then after my little stint in New Zealand, I went to Cambridge, and that’s where I am now. And within two weeks, I’m starting my new position at Oxford pursuing the same topic. So that’s how I fell into it.

John: The reason why it is important for people to know about it is because I think, along with global warming, climate change, I think one of the forces that is most going to shape our lives in this coming century is going to be the advent of increasingly sophisticated machine learning technologies.

John: For those of your audience who are older than, let’s say, 30 years old, they will be able to attest to the transformation that the world has gone through in the past two and a half decades. That generation was there just before dial-up internet arrived.

John: And now you compare that world to the world we inhabit today, and it’s almost changed beyond recognition. Just the way that we interact with our environment, the way that we use the objects around us to get our life done.

John: Once upon a time, you might’ve written a shopping list. Once upon a time, if you arranged to meet someone in town, you would have had to be very specific about where and when you would meet them. There was no chance that you could just text someone and say, “Hey, here I am.” So the world has just changed, and that pace of change is only going to increase.

John: This stuff is here to stay. So that’s why people need to know about it, because it turns out it has lots of tentacles, and it affects lots of things. Politics, our day-to-day social lives, our engagements in work, our professional… It’s got lots of tentacles and lots of repercussions.

John: So citizens just need to get on top of the main issues, in the same way that your average citizen knows stuff about global warming. They might not know much about meteorology and geography and oceanography. They might not be able to tell you what the latest climate models are.

John: But everybody’s got a basic level of understanding about global warming, and that allows them to participate meaningfully in the democratic process. So the goal of this book is to try to get that level of education up a few notches for everyone.

Andy: Where are we at right now in terms of AI? Because doesn’t it seem like, when you try to talk to Siri, she just doesn’t really know what you’re talking about. And she’s like, “Hey, I searched Google for you. And here’s what I found.” And it’s like, “No, I wanted you to order me a cappuccino from Starbucks. Never mind. I’ll just call them myself.”

Andy: I guess it seems like AI is maybe… I guess it’s pretty good at recommending you a new song on Pandora based on what you had listened to in the past, but not so good at really understanding your intent or having a full conversation with you. So where are we exactly, and where are we going?

John: That’s an excellent question. Your audience, no doubt, will be bombarded by lots of messages coming from the press, the media, TV, CNN, PBS News, along the lines of, “There’s something to worry about here. AI is going to take over. It’s going to take over our jobs. It’s going to perform all the surgeries for us in the future. We won’t need hairdressers anymore.”

Andy: All the self-driving cars.

John: Self-driving cars, yeah.

Andy: Flying cars that drive themselves…

John: Yeah, yeah. So on the one hand, there is this onslaught of information that tends to be apocalyptic, and tends to, well, frankly, exaggerate the current potential of AI technology.

John: But then there’s this other lived experience that we all have of AI, such as it exists at the moment in the form of our smart phones and smart devices and so forth, which is anything but smart. It’s insanely-

Andy: It feels kind of rudimentary.

John: –and infuriatingly stupid. Both of these things are going on.

John: Traditionally, AI comes in two flavors, right? There’s what’s called, I don’t know if you might’ve heard of this, there’s what’s called weak AI. And there’s strong AI. Now weak AI is the AI that we’ve got now. This is AI that can basically do one thing, and do it pretty well. Okay? But it can’t do anything else.

John: So yeah, you can have a system that is uncannily good at predicting what you would like to purchase on Amazon in light of what you have purchased before. You’ve got satellite navigation software, and all sorts of things that are used in governments, that are used by bureaucracies, that help day-to-day administration of large states and local governments and councils.

John: So there’s that kind of AI. And this is all, as I said, weak AI. It does one thing, and it does it pretty well, but it can’t do anything else. The Holy Grail-

Andy: Or like, when you try to go off script, it gets really confused. Sometimes you call into customer service, and it’s an automated system. And as long as you’re still, “Yeah, I want to make a reservation. Yup. 3:00 PM. Thursday. Yup. That sounds good. Okay. Great.” Everything is fine, and it totally understands what you’re saying.

Andy: But as soon as you’re talking about like, “Okay, now, do you guys do the dyes and the cuts there? And would you also do a shave, or do I need…” It gets confused. “Wait now, so are you guys located on this block? Or would I go past the Jamba Juice in order to get there? And then which way do I turn?” It’s confused already. It doesn’t know what’s going on.

Andy: So it’s like, Yeah, I can handle things in a narrow sort of avenue that it has been trained to do. And not really when you try to throw it a curveball.

John: That’s it, that’s it. The Holy Grail of AI research is not weak AI. It’s what has traditionally been called strong AI. And this is the kind of AI that does all of those other things we were talking about, but everything else too.

John: Basically, that does what a human being can do. A human being can play tennis, then play chess, then go and do an arithmetic calculation. Then go and engage in some other fun social activity and interact with someone else, engage in conversation by saying things that are appropriate to the context. We take it for granted.

John: But things like, if I told you, “Could you go to the shop and pick up some milk?” You would know exactly what I meant by that. But it turns out to be extremely difficult to program a computer to understand that in a way that makes sense, the way we think it makes sense.

John: Because with a computer… I mean the most logical language would have the computer do something like, an embodied machine, a robot, go to the shop, pick up milk, and the job’s done. Because that’s the shortest line between those two points. That’s the most direct logical sort of language to express the idea in.

John: That’s the Holy Grail of AI research, to get systems that can do things that are adaptively fluid and intelligent and flexible the way human intelligence is. And we’re just nowhere near that.

John: So to answer the question finally, where are we at with AI? We’re basically at the frontier of weak AI. We’re pushing the boundaries of weak AI, but we’re nowhere really much closer to reaching the other objective.

Andy: And so, there’s all the futurists and the Ray Kurzweils out there saying we’re going to be maybe how far away from getting to the strong AI? Are we talking 20 years? Are we talking 10 years? Are we talking 50 years?

John: I’m not in a position to say, but I wouldn’t guess that it would become a reality any sooner than 100 years. Yeah, but maybe that’s really pessimistic. I know others think that by 2050, 2040 even, we might see something like maybe a conscious AI. But I’m skeptical.

Andy: Okay. Well regardless, it’s definitely a big part of our lives. It’s changing the way that we interact with technology and with each other, and it’s not going anywhere.

Andy: So it makes me wonder, how do we push our teenagers to develop skills that are going to be relevant, and interests that are going to last? That are going to still matter as AI kind of takes over and approaches the strong AI?

John: Yeah, that’s a good question. Let me bracket the issue of strong AI, because there are enough interesting issues for parents that arise with regard to weak AI as it is. So let’s just bracket the strong AI for the moment.

John: If I were a parent, and I’m not, but if I were a parent there’s a couple of, I suppose, basic points that I would have in my mind about trying to steer my child, or at least guide my child in the path that will lead them to something good and wholesome for their future. A career that they can get something out of and that will last as long as they’re happy to stay in it.

John: Some really basic things. Your child should end up doing, ideally, what they’re good at doing, and what they enjoy doing. And generally, if you’re good at doing something, you tend to enjoy it. So that’s a cardinal principle that will stand the test of time.

John: Now with that in the background, then we’ve got this reality that more and more of the, let’s call them process-driven tasks, process-driven jobs. Anything that can be broken down into small parts.

John: Any task that can be broken down into smaller tasks, and that doesn’t require too much in the way of individual human discretion or judgment, right? So it’s more formulaic. Any task that’s of that character will be increasingly automated as we go forward.

John: With that in the background, what does that leave? Well, I think what that does is, it means that the roles that had traditionally been roles performed by women. So the caring professions, teaching, nursing, counseling, those sort of roles, are not really in danger of being automated.

Andy: Interesting.

John: The jobs that are in danger are the ones that have traditionally been performed by men. Banking, finance, lots of parts of engineering. Even lots of aspects of law, as well. A lot of that can be automated and made to fit into a kind of mold where it’s routine-based. It’s iterative, it’s recursive.

John: So I think that’s an interesting development that we’re going to see, I believe, that we are going to see happen over the next say 20, 30 years. Whether this means that those professions that traditionally women have performed, in the caring professions, whether they will be remunerated better, is another question.

John: There’s good enough reason, I think, to believe that they won’t be any better remunerated than they are now. Because generally, the way our economies tend to work is that you get paid more for a job, the rarer the job is, and the harder it is to perform. But caring roles are generally not rare. I mean, everyone has the capacity to be empathic, to feel compassionate, because that’s just who we are as humans.

John: So there’s an issue there. I’m not sure that we will see a recalibration of wages in the labor market. But what I think will probably happen is that more men will be flocking to those traditionally women jobs, just because so much of the traditional male jobs will now be performed by machines.

John: To bring it back to what you do with your children, always you’re navigating by the star of your child’s own innate interests, and what they’re good at. I would try to steer them in something that I think had a future. And the less process-driven and formulaic and algorithmic a job, the more chance it will survive into the future.

Andy: I hear people talking like, “Aren’t the jobs of the future going to be that we need people to manage the AI? We’re all going to be sitting around and programming the AI, and going through and watching what it’s doing. The factory people of the future aren’t actually doing the work themselves, they’re using the software and managing all of that.”

Andy: So are we wanting to master AI ourselves, or push our kids into fields that are related to AI or education that will make them proficient in that? Or is that a losing battle?

John: It is a very good question. We can all agree that reading, writing, and arithmetic are just the basic solid foundation of any education. We don’t even question it. That’s just kindergarten, first grade, second grade, all the way through. That’s just the foundation of an education.

John: I think that we should have coding added to that. So what we should be teaching our kids is reading, writing, arithmetic, and coding. But then we don’t aspire in life to become professional readers, or people that perform arithmetic as a profession.

Andy: Arithmetic-ers.

John: Yeah, yeah, right! These are just taken to be the basic building blocks of living in a complex civilized society. And coding should be in the same category.

John: On the one hand, what I’m saying is that we all need to be better at coding, and we need to have that knowledge imparted to our children. On the other hand I’m saying, don’t think of a career in terms of being able to code. Think of every career as somehow involving an element of coding, or in which being able to code becomes handy at times.

Andy: Beneficial. Yup.

John: And that then means that we’re asking a different question. We’re not really interested in having a society where everybody does computer science, and where the universities are just spewing out computer science graduates.

John: We’re thinking more in terms of, what are the jobs that will survive? To what extent will they involve formula and process-driven work? Let’s try to encourage our kids to develop their talents in that direction, and then we’ll have coding as part of the curriculum, come what may.

John: The reason why I’m skeptical about encouraging more students than necessary to engage in IT work just because we’re on the cusp of this machine learning revolution, is because if you think about it, once you have an entire, say, production facility computerized… Honestly, compare that with the same production facility that’s staffed by humans.

John: Now you said that we’ll need people to maintain the computers. Sure. But how many people do we really need to be able to maintain a production facility? I’m willing to bet that if you had a factory with 50 people doing the production, and then one where the computer did everything, you might only need two, three, or four technicians. The ratio of maintenance guy or gal to computer is a lot lower.

Andy: What about this idea of bias? Can AI be biased? And I guess specifically with regard to teenagers, how would that matter?

John: If you look at who is doing IT and artificial intelligence and machine learning, you look at the industry and the composition of the industry, it’s overwhelmingly white, straight men. Overwhelmingly. There’s a whole series of demographic and sociological factors as to why that’s the case, but that’s the way it is.

John: It’s unsurprising to learn that these technologies that are being developed by straight, white men, are going to be created in the mold of straight, white men. They’re going to be tested on straight, white men. They’re going to reflect the assumptions that straight, white men have about the world.

John: I mean, obviously there’s no one abstract thing called a straight, white man. We’re all very different. But if you include other ethnicities, other cultures, other sexualities, other genders. If you include as many types of human in the mix, you’ll get a very different kind of technology. One that reflects the assumptions of all of these different people.

John: And it’s become sort of a notorious fact about AI tech that it tends to perform really well… You name it, whatever technology we’re talking about, it will tend to perform very well on a straight, white male, or at least a white male. And then as you deviate from that sort of type, it degrades. Performance degrades.

John: So by the time you get to an Asian woman, or a transgender individual, it starts really degrading. It doesn’t know what it’s dealing with. An example would be a classifier that’s meant to recognize your face, or is meant to label someone for some characteristic. Maybe let’s just say that it’s meant to recognize whether you’re old or young, or whether you’re a woman or a man. Whatever the criterion might be.

John: Well, if you give that system a white man, it will probably get the right answer, whatever the classification it’s doing. And if you derogate from that, you tend to get increasingly wrong answers.

John: There’s a great documentary on Netflix at the moment called Coded Bias. And it’s a fantastic summary of all the issues with AI about bias. Really, really good.

About John Zerilli

John Zerilli, PhD, is the author of A Citizen’s Guide to Artificial Intelligence and The Adaptable Mind. He is currently a Leverhulme Fellow at the University of Oxford and an Associate Fellow at the Centre for the Future of Intelligence at the University of Cambridge. His area of expertise lies at the intersection of philosophy, cognitive science, artificial intelligence, and the law.

Dr. Zerilli was the recipient of a Cambridge Commonwealth Trust scholarship to undertake the Cambridge LL.M. (2008), wrote the highest-ranking thesis of his year at Cambridge (2009), and won the Lucy Firth Prize for best publication in philosophy at Sydney University (2010). He was called to the Sydney bar in 2011. He has published numerous articles, canvassing law, political economy, philosophy, and cognitive science, which have appeared in journals such as Philosophy of Science, Synthese, and Philosophical Psychology.

Originally from Australia, John recently became a resident of Oxford in England.

Want More Dr. Zerilli?

Find John at Oxford University, LinkedIn, and Twitter.