Jaron Lanier, author of “You Are Not a Gadget”, is very well informed about his subject: the social consequences of the internet, and some of the implicit ideologies built into the internet as we live with it today. Lanier was one of the early minds behind virtual reality and has helped create a lot of the technology that shapes how we live and how we think. In his book (Knopf, New York, 2010) he offers reflections on this technology, on recent and coming trends, and on the relationship of the technology to society.
Because he is so well informed and such an authority on these topics, I was inclined to just read the book and agree with everything. Which I did, until I couldn’t any more. I think the book has a lot of interest and a lot of value, and I will try to identify areas of agreement and disagreement. The book contains various interesting nuggets: Lanier’s ideas in fields from finance to virtual reality to biology. I won’t address those. I will confine myself to some of Lanier’s ideas about the politics of technology and about economic models.
Ideologies built into technology
One of the interesting themes of the book is that, while ideologues (writers, say, or teachers) try to use arguments to influence people and change society, technologists do something much more powerful: they make designs which, if adopted, make you live the ideology, so that you may not even realize you are living according to some technologist’s choice. This is powerful, and much of the book tries to identify the ideologies buried in the technology we are using.
In Part One, “What Is a Person?”, Lanier looks at social networking sites like Facebook and contrasts them with the early internet of the 1990s. In those days, people’s pages looked very different from one another and could take any number of forms. They were constrained by all kinds of technological elements (such as file-based information architecture, which is common to every computer and for which there is no alternative), but compared with, say, Facebook, they offered many more possibilities for individualization. On Facebook, your identity consists of your selections of answers to multiple-choice questions (relationship status and so on). Computers work with representations of information. How do you represent a person, or a real thing? With a form of some kind.
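To make the point concrete (the illustration is mine, not Lanier’s, and the field names are invented), here is roughly what a person becomes once a site’s designers have fixed the schema: a minimal Python sketch of a profile as a set of multiple-choice fields.

```python
from dataclasses import dataclass, field
from enum import Enum

class RelationshipStatus(Enum):
    # The designer decides the menu; the user can only pick from it.
    SINGLE = "single"
    MARRIED = "married"
    ITS_COMPLICATED = "it's complicated"

@dataclass
class Profile:
    name: str
    relationship_status: RelationshipStatus
    interests: list[str] = field(default_factory=list)  # tags, not free expression

# Anything about a person that has no field in the schema cannot be recorded.
alice = Profile("Alice", RelationshipStatus.ITS_COMPLICATED, ["hiking", "chess"])
print(alice)
```

The sketch is trivial on purpose: the constraint is not in any one line of code but in the fact that the schema, once chosen, defines what a “person” can be in the system.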
As an example that isn’t in Lanier’s book, think of the Iraq or Afghan war diaries. These turn events in which real people were killed into database entries. We can study those events and look for patterns in them, but only through what was entered into the database. So when you look at the entry for the Haditha massacre in Iraq, it’s this tiny little redacted thing. There was a lot more going on than made it into that database.
Lanier discusses the computer representation of music (a format called MIDI), which has restricted much of what is possible in digital music even as it opened up all kinds of possibilities. It laid down, firmly, what a note was. And once a format becomes the standard, technological “lock-in” makes it very hard to change. So we might be stuck with the choices we make now. Lanier’s fear is that we will come up with standardized formats for representing people, and that we will be stuck with those choices in the future.
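To see concretely what it means for a format to lay down what a note is, here is a small sketch of MIDI’s note model. This is my illustration, not Lanier’s; the message layout and tuning formula are the standard MIDI ones, but the code is just a toy.

```python
# A toy illustration of MIDI's definition of a note: pitch is one of 128
# integers on an equal-tempered grid. (MIDI does have channel-wide
# pitch-bend messages, but a note itself is one of these 128 steps.)

def midi_note_to_hz(note: int) -> float:
    """Standard tuning: note 69 is A440; each integer step is one semitone."""
    return 440.0 * 2 ** ((note - 69) / 12)

def note_on(channel: int, note: int, velocity: int) -> bytes:
    """A Note On message: status byte, note number (0-127), velocity (0-127)."""
    return bytes([0x90 | (channel & 0x0F), note & 0x7F, velocity & 0x7F])

print(midi_note_to_hz(60))   # middle C, ~261.63 Hz
print(note_on(0, 60, 100))   # three bytes: everything MIDI knows about this note

# A pitch between the grid lines has to be snapped to the nearest integer;
# the continuous glide of a voice or a violin has no direct representation.
```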
Apparently (I didn’t know this) there is an obsession among technologists with the idea that the internet itself, or computers, might one day become conscious and intelligent, or already are. And that we could upload our consciousness into machines and live forever. Lanier fears that this belief, which he doesn’t think much of (and neither do I), is one of those ideologies being built into our technology today. His fear, specifically, is that this “hive mind” idea will require people to restrict themselves to fit the ideology: machines can be conscious if we are all willing to restrict our definition of consciousness to what machines can be and do. We would become more like machines, rather than having our technology and tools serve us.
The idea, or ideology, that crowds are more intelligent than individuals has led to the celebration of wiki-based and crowd-sourced solutions to problems. But because it is taken as a doctrine rather than as a hypothesis, we don’t know much about the conditions under which crowds are smarter (or stupider) than individuals. Worse, the emphasis on anonymity (specifically, “drive-by anonymity”) has led to a lack of accountability and has brought out the troll in each of us: “the troll-evoking design is effortless, consequence-free, transient anonymity in the service of a goal, such as promoting a point of view, that stands entirely apart from one’s identity or personality” (pg. 63). I agree with this as well. Crowds are smarter than individuals in some conditions and not in others. A lot of political theory (and specifically liberal theory) is designed to make it possible for society both to take advantage of the wisdom of the people and to protect the individual’s rights and dissent.
Another very interesting critique Lanier makes is of security analysis. Probably the most popular writer on this topic is Bruce Schneier, whose blog I follow; he is not mentioned in Lanier’s book. In Lanier’s estimation, security analysis is part of a pervasive “ideology of violation”. He tells this story (pg. 65):
“In 2008, researchers from the University of Massachusetts at Amherst and the University of Washington presented papers at two of these conferences (called Defcon and Black Hat), disclosing a bizarre form of attack that had apparently not been expressed in public before, even in works of fiction. They had spent two years of team effort figuring out how to use mobile phone technology to hack into a pacemaker and turn it off by remote control, in order to kill a person…
“… there is a strenuously constructed lattice of arguments that decorate this murderous behavior so that it looks grand and new… what if they had devoted a lab in an elite university to finding a new way to imperceptibly tamper with skis to cause fatal accidents on the slopes? These are certainly doable projects, but because they are not digital, they don’t support an illusion of ethics.”
The ideology of violation states that finding and publicizing ways to attack society makes society safer. According to the ideology of violation, the only alternative is “security through obscurity”, which doesn’t work because the “internet is supposed to have made obscurity obsolete”. Lanier disagrees (pg. 67): “Surely obscurity is the only fundamental form of security that exists, and the internet by itself doesn’t make it obsolete… the reason that computer viruses infect PCs more than Macs is not that a Mac is any better engineered, but that it is relatively obscure. PCs are more commonplace. This means that there is more return on the effort to crack PCs.”
Lanier nuances his analysis of the ideology of violation with the possibility that good can come from this kind of security analysis (in programming, for example), but the pacemaker example (two labs and two years to come up with an idea that is only useful for murdering someone) shows how far the ideology has gone out of control. I think Schneier would argue that failing to understand threats leads to irrational fears, whereas rational fears can lead to rational security policies. But Lanier points out that some of the studies of exploitation are themselves spreading fear, and having antisocial results.
Lanier’s economics
Arguing that the GNU/Linux operating system is basically a copy of a 30-year-old system (Unix), and that Wikipedia is basically a copy of an encyclopedia, he suggests that the two great open/free collaborative poster children are not very innovative or impressive. Most of the YouTube videos you’ll see are prank videos of the “blooper” type, or mashups of high-budget studio productions of the kind whose revenue streams are being destroyed by the very web on which they get so many views. The few companies that make their billions from controlling and directing the traffic do so through ad revenue, which is ironic, since ads were one of the things the early internet ideologues had hoped to destroy. Crowds, in Lanier’s view, are not creative the way individuals are, and individuals cannot work for free. Lanier contrasts open source efforts with the iPhone and other really innovative things, which have come from individuals “conceiving the vision and directing a team of people earning salaries” (pg. 132).
As a way out of this, he argues for a system in which individuals are paid each time others access their content. A person using the content would pay a small amount for each piece (essay, video, song), each time. This would enable very popular individuals to accumulate wealth and would give everyone involved in culture a built-in revenue model. The “only alternative” to this vision, Lanier argues, “would be to establish a form of socialism” (pg. 103). He provides some cautions about socialism, and asks: “Can a digital version of socialism also provide dignity and privacy? I view that as an important issue – and a very hard one to resolve.” (pg. 104)
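As a rough sketch of the accounting Lanier seems to have in mind (this is my reconstruction; the content names and the per-access price are invented for illustration):

```python
from collections import defaultdict

PRICE_PER_ACCESS = 0.02  # an assumed flat per-access price, in dollars

authors = {"essay-17": "alice", "song-42": "bob"}  # content id -> author
earnings: dict[str, float] = defaultdict(float)

def access(content_id: str) -> None:
    """Each access credits the content's author with a small payment."""
    earnings[authors[content_id]] += PRICE_PER_ACCESS

for _ in range(1000):   # a popular song earns with every play
    access("song-42")
access("essay-17")      # an obscure essay earns only a little

print(dict(earnings))   # {'bob': 20.0, 'alice': 0.02}
```

The last line is the point of the model: revenue scales with popularity, which is why Lanier thinks it offers a built-in livelihood for culture.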
I disagree with Lanier’s critique of open-source and free software, and with his prescriptions (which involve various ways of making things more proprietary, ways that I think would require quite a bit of repression to actually work). I think that Lanier’s critique extrapolates too much from a few cases and overstates innovation in proprietary contexts. Innovation isn’t something people do only when they are paid for it. It is part of human nature, and people will do it if they have the opportunity and the tools, in proprietary or open contexts, whether they are paid fantastically or at a subsistence level (though probably not if they’re hungry). Accumulating wealth is a different issue altogether, and Lanier makes a connection between the two that isn’t really there. This is a capitalist system. To make huge amounts of money, you have to find a way to take other people’s work, to steal something from other people or other species that no one has paid for before, or to manipulate financial systems and laws so that a bunch of the newly invented money goes to you. None of this has anything to do with innovation. The value of free software for innovation is simply that it’s easier to innovate when you have access to what other people have done before. Lanier doesn’t really deny this; he only argues that proprietary systems have been more innovative (and I don’t think his dismissal of Wikipedia and GNU/Linux and his celebration of the iPhone constitute enough evidence for that).
In other words, society needs innovation but it does not need to tie innovation to the accumulation of wealth in order to encourage it. Innovation is not, today, tied to accumulation of wealth. Today, those who have accumulated wealth are able to seize and control other people’s innovations. The question of the distribution of wealth is a separate issue from that of innovation, and another question that Lanier and I disagree about.
Lanier’s discussion of “digital socialism” is short (pg. 103-104) and consists mainly of cautions: if socialism is a taboo subject, he asks, then we probably aren’t ready for it, are we? How would physical allocation be done under socialism? These are rhetorical questions for Lanier, but for me they are serious ones. Lanier argues that “Private property in a market framework provides one way to avoid a deadening standard in shaping the boundaries of privacy. This is why a market economy can enhance individuality, self-determination, and dignity, at least for those who do well in it. (That not everybody does well is a problem, of course…)”
I think that “private property in a market framework” requires constant violence to enforce: the constant seizure of things from nature, the exclusion of people from the means of survival, the seizure of their work. Past socialist societies haven’t protected dignity and privacy all that well, but neither have capitalist ones. If yesterday’s designs have locked in ideologies about “hive minds” and violation, as I agree that they have, they have also locked in ideologies about class, inequality, and the superiority and domination of some over the rest. Couldn’t tomorrow’s designs lock in ideas about equality, cooperation, and harmony? In my preferred form of socialism, participatory economics, written up by Michael Albert and Robin Hahnel, people contribute their work in an organized way and are remunerated based on the effort and sacrifice they put in at jobs that are balanced for empowerment and interest across the society. I think such a system would have plenty of innovation and would protect privacy and dignity better than capitalism or previous versions of socialism have.
Bathed in goodwill
I agree with Lanier that human ends should be the basis of our technological designs. He advocates “digital humanism”, and so do I. But I do not agree with his economic prescriptions for bringing it about. We agree about the failures of advertising revenue and we agree that an ideology that expects everything for free is a dead end. But I believe that if care is taken in its design, socialism would be much more compatible with these humanist ends than capitalism has been. Lanier’s epiphany (pg. 107), with which I’ll conclude, is as eloquent an argument for my belief as anything else I could write.
“I had an epiphany once that I wish I could stimulate in everyone else. The plausibility of our human world, the fact that the buildings don’t all fall down and you can eat unpoisoned food that someone grew, is immediate palpable evidence of an ocean of goodwill and good behavior from almost everyone, living or dead. We are bathed in what can be called love…
“And yet that love shows itself best through the constraints of civilization, because those constraints compensate for the flaws of human nature. We must see ourselves honestly, and engage ourselves realistically, in order to become better.”
Indeed.
Justin Podur is a Toronto-based writer. His blog is at killingtrain.com, and he can be reached at justin@killingtrain.com.