Computers...taking over...must...make...soulful art...

I've been thinking a lot about this recent op-ed by Jaron Lanier, which claims that we commit a "devaluation of human thought" when we treat computers like minds.

Lanier gets a number of the facts right: the artificial intelligence spin is a sure boost to any technological story, and the analogies between computer "intelligence" and human intelligence are largely overblown in order to win press coverage. He's also right that it's dangerous to imagine we could ever know for sure whether a human mind has been successfully downloaded into software; any tests we devise to "make sure" that consciousness has been effectively digitized will be the exact tests that programmers and designers build their systems to satisfy. (We all know it's possible to train kids to ace a test without educating them; the same risk exists in the design of computer intelligence systems.)

For the most part, I side with N. Katherine Hayles, whose book How We Became Posthuman explains how we came to believe in the idea of a machine intelligence that could equal or exceed our own. Hayles has serious doubts about that possibility, chiefly because it depends on the belief that the nature of human (or even animal) intelligence can be divorced from its material structure. The mind, she points out, doesn't operate on simple binary principles; until we can create structures that exactly mimic the material workings of the human brain--and to do that, they would have to be incorporated into a nervous system and "body" that closely parallel our own--we won't ever create an intelligence that works like ours, but better.

That said, I think Lanier's image of the bottomless wonder of human intelligence and creativity is overly simplistic and idealistic. He wants to preserve our gasping admiration of "personhood," and he feels threatened by the possibility that we may "begin to think of people more and more as computers."

This concern--that human individualism is under fire--has been around for about as long as the idea of human individualism itself, and I don't see what's compelling or new in Lanier's version of it as applied to A.I. He could just as easily be panicking about the Industrial Revolution turning people into machine parts, or about capitalism turning people into "the bottom line"--hell, he could be panicking about the invention of writing devaluing the presence of human teachers, advisors, and memory. His worry that people are passively letting computers mechanically determine their aesthetic choices through Netflix or Amazon is particularly offensive, since it imagines that

(1.) anyone is actually mindlessly doing such a thing [!?] and that
(2.) the aesthetic is the realm of individual expression and taste, which--prior to the existence of evil mechanical algorithms--was some sort of sacred realm in which untarnished individuality blossomed.

I guess those of us who study the arts should be grateful for this bone being tossed our way by a member of one of the more "practical," "sensible" fields. But this is what scientists and engineers always do--they make pleas for the arts and the aesthetic by turning art into a wonderful world that defies logic and keeps us grounded, soulful, human. If this is the price to be paid to get public respect for students and producers of the arts--if we're allowed to be kept around only as the infantilized and inexplicably necessary darlings of scientists, as their extracurricular activities, toddlers and lap dogs--I think most of us would rather be put down.

Let me offer my grounded, soulful, human rebuttal to Lanier's paranoia. Any useful technological breakthrough acts as an extension or distribution of mental processing, so it does work as part of a mind--one that includes human minds in its circuit. It's ridiculous to imagine that the human mind is somehow separate from these systems--as if it weren't altered by the use of a technology like language or writing. All of these technologies interact with human minds in reciprocal ways. Recognizing the importance of the human brain in this system is crucial, but we should also recognize that the line between "active mind" and "passive tool" is far more arbitrary than Lanier is willing to admit--he insists that computers are "inert, passive tools" precisely to keep his idea of the ineffable human intact.