Is Technology Making Us Stupid?


Moral panics over new technology are as old as recorded history. Socrates railed against writing. In Plato’s Phaedrus, he feared that if people relied on the written word they would “cease to exercise their memory and become forgetful.”

Printing triggered its own panic. The fifteenth-century abbot Johannes Trithemius, author of De laude scriptorum manualium (“In Praise of Scribes”), worried that printed books would cause monastic scribes to lose their spiritual connection with God. “It is only by the act of copying the Scriptures that a scribe can become truly in touch with the word of God,” is how author David Malki later summarized his argument.

Those fears proved unfounded. But modern cognitive science, which has unlocked many of the secrets of the brain, appears to back up the claim that yes, modern technology might actually be making us less clever.

Facts Are So Yesterday

Take using screens to glean information. In a paper in Science in 2009, Patricia Greenfield, professor of developmental psychology at UCLA, reported, “every medium develops some cognitive skills at the expense of others. Although the visual capabilities of … the Internet may develop impressive visual intelligence, the cost seems to be deep processing: mindful knowledge acquisition, inductive analysis, critical thinking, imagination and reflection.”

But there are even more profound worries: that the Internet, with its vast capacity for facts, will replace the need for humans to learn things at all. Schools, on this view, don’t need to teach facts; they just need to teach Google. Such claims are widely repeated. Jim Taylor, a professor at the University of San Francisco, made exactly this claim in an article for Psychology Today in December 2012: “Rather than making children stupid, it [technology] may just be making them different. For example, the ubiquitous use of Internet search engines is causing children to become less adept at remembering things and more skilled at remembering where to find things. Given the ease with which information can be found these days, it only stands to reason that knowing where to look is becoming more important for children than actually knowing something.”

Memory Is The Residue Of Thought

This, according to Daisy Christodoulou, author of Seven Myths About Education, fundamentally misunderstands how the brain works. Facts, or chunks of information, are the very stuff that our brains manipulate when we think. “Memory is the residue of thought,” she says. “We remember what we think about. If we accept that memory is important and we accept that in order to remember you have to think about it, then it all becomes about what people think about.”

Math Made Fun

One organization that does not believe that computers are making us stupid is Computer-Based Math (CBM), a body driving a completely new form of math curriculum, one in which the majority of the heavy computational work is done by computers.

Computer-Based Math is the brainchild of Conrad Wolfram, one of the two brothers behind Wolfram Alpha, the computational knowledge engine, and, more famously, Mathematica, computational software for scientists. It hopes to free schoolchildren from the drudgery of solving quadratics and allow them instead to enjoy mathematics.

“Why get students emulating what computers do so much better (computing) rather than concentrate on imaginative thinking, analysis and problem-solving that students ought to be able to do so much better even than today’s computers?” Wolfram wrote on his blog.

Too often, he says, we insist on teaching children things for no other reason than that they are good for them. “Should you learn long division by hand?” he says. “I do not see the purpose in it. I have never done long division. I have done approximate long division but that is a totally different process. No one today would do it. You would use a computer or a calculator.”

Wolfram is not advocating that children skip the basics of mathematics, but rather that once the principles are understood there is little to be gained by forcing them to grind through endless examples by hand.

“Doing basic equation-solving will help you understand what is going on in an equation, but learning long division doesn’t really help you to understand what’s going on in division.”

Instead of quizzing children on factoring polynomials by hand, he advocates more open-ended questions. In an earlier interview Wolfram said: “Take a problem like, ‘Am I normal?’ There is no one answer to that, but it is about how do you measure normal? There are different ways to measure it, different problems that need to be tackled. Data needs to be interpreted. It is not like calculating a quadratic equation.”
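What might such a lesson look like in practice? As a rough sketch (invented for illustration, not taken from any CBM material, and with made-up sample data), a student could let the computer do the arithmetic while the class argues about what to measure and what should count as “normal”:

```python
# A toy take on Wolfram's "Am I normal?" question, narrowed to height.
# The sample data below is invented for illustration.
from statistics import mean, stdev

sample_heights_cm = [158, 162, 165, 168, 170, 172, 174, 177, 180, 185]
my_height_cm = 176

mu = mean(sample_heights_cm)
sigma = stdev(sample_heights_cm)   # sample standard deviation
z = (my_height_cm - mu) / sigma    # distance from the mean, in SDs

print(f"mean = {mu:.1f} cm, sd = {sigma:.1f} cm, z-score = {z:+.2f}")
# The computer does the calculating; deciding which measure to use,
# whose data to compare against, and where "normal" ends is the
# open-ended, human part of the exercise.
```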

One of the countries to adopt parts of the CBM curriculum is Estonia, which in 2013 ran a pilot in 30 schools to replace the existing statistics curriculum with a CBM scheme.

“I think it has gone rather well,” says Wolfram. “The teachers were all a bit nervous before because it’s all very different. I think for the most part they found it much more enriching to teach.

“Some modules I thought were hard, the students reported were easier than traditional maths. I think the reason for that is that they understood what they were doing and have more fun.”

Christodoulou, for her part, is wary of simply putting computers at the heart of lessons. “Plenty of teachers, me included, will have experience of taking children to the computer lab and getting [them] to work on something. They spend a lot of time on PowerPoint doing the animation or worrying about the fonts. What they end up thinking about are the fonts.”

She cites the work of U.S. cognitive scientist Daniel Willingham. “Data from the last 30 years lead to a conclusion that is not scientifically challengeable: thinking well requires knowing facts,” he wrote. “You could summarize this view by noting that to think is a transitive verb. You need something to think about.”

But not all agree. Wolfram says that, far from making us stupid, computers are freeing us up to do higher-order thinking while they mop up the drudgery.

“What human history seems to show is that what the machine enables us to do is to go further. Going further usually is conceptually and intellectually much harder than what the machine was originally doing.

Humans Are Having To Think In A Much More Sophisticated Way

“The machines are now doing all sorts of things that we thought were very human, but I don’t think that means we get dumber; it actually means we get smarter and more sophisticated,” says Wolfram. “If you look in science and maths since computers have done the calculating that has meant that the problems in science, maths, engineering, all these places, have got incredibly much more complex. The humans are having to think in a much more sophisticated way, drawing on more facts than before since computers.”

Luc de Brabandere, a Boston Consulting Group Fellow and guest speaker at DLD, says that humans will always be more intelligent than computers because the way we think differs from the way they process information.

There are two forms of reasoning: deductive and inductive. Deductive reasoning is the stuff of first-year philosophy students. “All men are mortal. Socrates is a man. Therefore Socrates is mortal.” If the premises are true, then the conclusion must be true.

“Deduction is the world of logic, of algorithms,” says de Brabandere. “I am convinced that one day, maybe not far away, deduction will be 100% done on machines.”

Inductive truths take the form of, “Because the sun came up yesterday, it will come up tomorrow.” The premise is true and the conclusion may be true, but it isn’t necessarily so. Induction, says de Brabandere, is beyond the machine. Why? “In order to induce you need to forget. A machine cannot forget. You cannot program a machine to forget, otherwise it’s not forgetting, it’s just another algorithm.”
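De Brabandere’s split is easy to make concrete in code. The toy sketch below (written for this article, not taken from de Brabandere) mechanizes the syllogism above by applying a single rule, man(X) -> mortal(X), to a store of facts until nothing new can be derived. That closed, mechanical loop is exactly what he expects machines to own; what the program cannot do is invent the categories “man” or “mortal” in the first place:

```python
# Toy forward-chaining deduction: mechanically apply rules to facts
# until no new conclusions appear. The facts and rule are invented.

facts = {("man", "Socrates"), ("man", "Plato")}
rules = [("man", "mortal")]  # rule: man(X) -> mortal(X)

derived_something = True
while derived_something:
    derived_something = False
    for premise, conclusion in rules:
        for predicate, subject in list(facts):
            if predicate == premise and (conclusion, subject) not in facts:
                facts.add((conclusion, subject))
                derived_something = True

print(("mortal", "Socrates") in facts)  # True: deduced, never stated
```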

I Do Not See Charles Darwin Being Replaced By Big Data

The reason you need to forget, he says, is to allow you to deal with the real world. In order to understand the world we need categories, collections of things that are similar. “When a parent says, ‘I am thinking about my children,’ this is true. When the CEO of a bank tells somebody, ‘I am thinking about my clients,’ this is not true. You cannot think about two million people. You need the CEO to produce some concepts or categories, like ‘retired,’ or ‘student,’ or ‘young mother,’ and so on. Of course to produce the category ‘student’ you have to lose a lot about the details of a student, but as a reward you can now change the world because you simplify the world.”

Big Data, which many have claimed will reveal insights that are truly new, is on this view a chimera: it can only reveal what it already knows. De Brabandere draws an example from history.

“Take Kepler, the astronomer, who realized that the orbit of a planet was not a circle but an ellipse. I think big data can do his job because the concept of an ellipse existed before Kepler. The machine can definitely see it and compare and say, ‘oh, there is an ellipse.’

“But if you take Charles Darwin, his big data was thousands of pages of observations. He came up with the survival of the fittest concept. This concept did not exist before. I do not see Charles Darwin being replaced by big data.”

But if, as de Brabandere argues, categories allow us to think about the world by simplifying it, then even here technology poses a danger.

This For Philosophers Is A Big Shock

“This idea of category today is challenged by Google. If you type ‘philosopher’ into Google, what happens? Google proposes categories. You can see just below suggestions like ‘Philosophers of the Renaissance,’ or ‘Greek philosophers,’ and so on. This, for a philosopher, is a big shock. For the first time in the history of man, a machine can propose categories.”

Why is this a risk? “I remember 20 years ago I read a book called Mindstorms,” says de Brabandere. “It was about children and computers. The last sentence of the book was, ‘In the end the question is, who is programming who?’ Is it the child with the mouse telling the machine to do this, or is the machine with a particular message on the screen telling the child to do that? Who is programming who?”

If the computer gives us our categories, and categories are how we think about the world, then we are letting computers frame our conceptual understanding of the world. We become a child clicking the box on the screen because the computer told us to.

The key, says Christodoulou, is to plot a path between techno-utopianism and neo-Luddism. “With any use of technology, in any field, you have to think very carefully about what you are trying to achieve.

“We certainly have not made people cleverer, even if we haven’t made them more stupid.”



