
AI Won’t Fix Education, but it Can Help

Now more than ever, our teachers need help. Faced with an unprecedented crisis, having to move all instruction online in a matter of weeks, everyone is figuring out a new playbook for teaching in the 21st century. Artificial intelligence is part of that conversation, and the short version is this: AI won't fix education, but it can help.

One of the topics discussed, alongside video chats, online quizzes, and remote attendance, is artificial intelligence. It's time to take a closer look at how AI can help our instructors today. The problem is that very few people understand AI.

Do you understand AI?

I do not mean that very few people understand the computer languages and algorithms that drive artificial intelligence, although that’s true for many. I mean that very few people understand the pattern of what “artificial intelligence” means to human beings in a macro sense.

Not understanding something as crucial to the future as artificial intelligence has consequences. Those consequences will cut across business, investment, technology, and media, and they will be most apparent when those sectors converge in a hype cycle or are thrust into the unknown.

We think AI is a solid thing.

We tend to think of AI as a solid thing, like a concrete structure. We believe that things are either AI or they are not AI. But history demonstrates that's not how it works. Depending on where we are on any given day, artificial intelligence is a moving target.

Just having passed the 50th anniversary of the first moon landing, it’s helpful to remember that at that time, the best minds in the world were working on the space effort. We thought that machines doing the heavy trigonometry and calculus required for guiding rockets would have to be “artificially intelligent.”

Collectively, we thought, “this math is hard, you have to be very intelligent to do it,” so we surmised that machines doing this work were intelligent, thinking machines.

We stopped thinking of them as being artificially intelligent once computers did fancy math faster and better than humans. Humans decided that what computers did was just advanced number crunching, and we moved the goal post.

In the 1970s and 1980s, many smart people believed a computer couldn’t master the strategic and creative thinking in chess. We used to believe that chess was the ultimate intelligent endeavor.

A computer that understood chess had to be intelligent. In the 1990s, computers began regularly beating the planet’s best chess masters. We accepted the fact then that intelligence didn’t win chess games. Computers dominated chess because they could cycle through many more potential moves far faster than any human.

Suddenly, a machine that could win at chess was a specialized algorithm, not artificially intelligent.
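That "cycling through moves" is ordinary brute-force search. Here is a toy sketch of the idea, with a tiny hand-made game tree standing in for real chess; the tree, leaf scores, and function names are invented for illustration only:

```python
# Brute-force game search of the kind that beat chess masters:
# enumerate moves, recurse, pick the best score. No "thinking" involved.
# A hypothetical three-node game tree stands in for a real chess engine.

TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1", "b2"],
}
SCORES = {"a1": 3, "a2": 5, "b1": 2, "b2": 9}  # leaf values for the maximizer

def minimax(node: str, maximizing: bool) -> int:
    if node in SCORES:  # leaf: just score the position
        return SCORES[node]
    children = (minimax(child, not maximizing) for child in TREE[node])
    return max(children) if maximizing else min(children)
```

The machine simply tries everything and keeps the best line, which is exactly why we stopped calling it intelligence once we saw how it worked.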

Similarly, we once thought the complexity of human speech put it beyond the reach of machines, until Siri, Alexa, and Google brought speech recognition into our phones and homes. The idea that someone could say, "Alexa, order pizza," and have pizza arrive at the door surely had to be powered by artificially intelligent computers that could understand and obey.

But now we see speech recognition as just translating soundwaves to text and then processing the text with some simple rules; there's no "thinking" anywhere in the process. And as we've come to accept that, we've discounted it and moved the AI target again.
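The "simple rules" step is easy to picture. A minimal sketch, assuming the speech-to-text stage has already produced a transcript (the intents and trigger phrases below are invented examples, not any real assistant's API):

```python
# Sketch of the rules step that follows speech-to-text: the transcript
# is plain text, and intent matching is just keyword lookup.
# Intents and phrases are hypothetical examples.

RULES = {
    "order_pizza": ["order pizza", "get me a pizza"],
    "play_music": ["play music", "play some music"],
}

def match_intent(transcript: str) -> str:
    text = transcript.lower()
    for intent, phrases in RULES.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"
```

Keyword rules like these feel magical when spoken aloud, but there is no understanding anywhere in the loop.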

Understanding AI as being fluid

The point is that AI isn't a fixed object; it's the next thing on the horizon, and that horizon keeps moving. Once we understand what a system does, once we accept it, we stop thinking of it as AI, and we move on to the next goal in smart machinery.

If history has taught us anything, we may never get to a place where we say, “Alright, that’s AI.”

We won't recognize anything as true AI until we have machines that mimic or surpass human intelligence and reasoning, the depth and breadth we call "Artificial General Intelligence." While that scenario will almost certainly happen eventually, it is a long way off.

That does not mean we won't develop compelling solutions to complex problems. Innovative products born of those problems will generally make our lives better, and we will have plenty of them, probably at an accelerated pace now. The area of AI where I work, education, offers good examples.

AI software

We are, right now, working on software that can sort responses and help instructors assign grades to written coursework at high speed and in great quantities.
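One simple version of that sorting, shown here only as a hedged sketch (real systems use far fuzzier matching than exact text), is to group identical short answers so an instructor grades each group once:

```python
# Sketch of response sorting for faster grading: identical short
# answers are grouped so each group needs only one grading decision.
# The student names and answers are hypothetical.

from collections import defaultdict

def group_responses(answers: dict[str, str]) -> dict[str, list[str]]:
    groups: defaultdict[str, list[str]] = defaultdict(list)
    for student, answer in answers.items():
        key = answer.strip().lower()  # normalize before grouping
        groups[key].append(student)
    return dict(groups)
```

Grading one representative per group, instead of every submission, is where the speed comes from.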

We are constantly improving teaching, learning, and coursework. AI is a particularly helpful tool when you can't meet with students in person. Other companies have developed computer tutors that can answer questions about certain subjects and suggest the most helpful learning resources.

Answering questions is extremely helpful when students aren't in physical classrooms. They can't raise their hands or ask classmates when they aren't physically sitting in the same room.

AI products

These AI products are taking over some of the human activities of teaching. Because they do things that people can do, we think of them as artificially intelligent. But like their chess-playing ancestors, AI-enabled tutors and auto-graders won't actually be thinking the way a human does.

AI tutors

These AI tutors comb through data, looking for patterns and similarities, and act accordingly. One day we will view these AI-enabled tools as ordinary, accepted technology. We'll think of grading AI the way we think of chess AI: a tool that helps us do something better or more efficiently.
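A minimal sketch of that pattern-matching, assuming a made-up resource list and a crude word-overlap score (production tutors would use much richer similarity measures):

```python
# Sketch of the similarity matching behind an AI tutor's suggestions:
# score each resource by word overlap with the student's question and
# return the best match. The resource titles are invented examples.

RESOURCES = [
    "Introduction to derivatives",
    "Solving quadratic equations",
    "Basics of cell biology",
]

def suggest(question: str) -> str:
    q_words = set(question.lower().split())

    def overlap(title: str) -> int:
        return len(q_words & set(title.lower().split()))

    return max(RESOURCES, key=overlap)
```

There is no understanding here, just counting shared words, which is exactly the kind of mechanism that stops looking like "intelligence" once you see inside it.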

AI is a tool

Knowing that AI is a tool clears up another hangup about AI: it's a tool, not a decision-maker. "Smart" computers may calculate and track the path of a projectile faster and more accurately than a person. But AI cannot decide whether we should go to the moon.

Similarly, in education, AI won't replace teachers, with their ability to connect, inspire, and guide students to become better people. Instead, AI will make many tasks easier and faster, as computing did for NASA flight engineers 50 years ago.

AI is here — and it’s perpetually just around the corner.

The point is that AI isn’t something we’re waiting for since it’s both here already and perpetually around the next corner. We can, and should, expect bread-and-butter shortcuts and solutions to things we think are complex or difficult. But that’s technology, not AI.

We get technology. Tech is something we see and experience, a thing we can make and invest in. We can call it AI if we want. We probably just won't call it that for very long.

Sergey Karayev

Head of AI for STEM at Turnitin, Co-Founder of Gradescope

My goal is to develop and deploy AI systems to improve human life. In 2014, I finished a PhD in Computer Science at UC Berkeley, advised by Trevor Darrell, and co-founded Gradescope, where we develop AI to transform grading into learning. In 2018, Gradescope was acquired by Turnitin, a leading ed tech provider. Recently, I have co-organized a weekend program on Full Stack Deep Learning, and was also fortunate to be selected for the 2019 UW Engineering Early Career Award.
