Deep Search. The Politics of Search beyond Google - Introduction

An introduction to the book Deep Search and an overview of the topics of the event series on the politics of search and digital classification.

It’s hard to avoid search engines these days. They have established themselves as essential tools for navigating the dynamic, expanding informational landscape that is the Internet. The Internet’s distributed structure doesn’t allow for any centralized index or catalog; consequently, search engines fulfill a central task of making digital information accessible and thus usable. It’s hard to imagine our lives without their ever-expanding digital dimension; and so it’s increasingly hard to imagine life without search engines. This book looks at the long history of the struggle to impose order on the always-fragile information universe, addresses key social and political issues raised by today’s search engines, and envisions approaches that would break with the current paradigms they impose.

At the moment, Google occupies a unique place in the field of digital search. It dominates the end-user market to such a degree that it approaches a practical monopoly in many countries around the world.1 Its ambitions far outstrip those of every other search company. Almost weekly, new initiatives are announced, many of them on a gigantic scale – from digitizing millions of books to rolling out an entire platform for mobile communication designed specifically for maximum integration with Google’s wide array of services. Even a total refusal to use Google would offer no escape. So many others use it (for example, in the form of its Gmail email service) that a substantial amount of one’s personal email communications will end up in its domain. And so many web sites and services are interwoven with its various offerings that it is unavoidable. Even if one insists on obtaining news in print, Google is there: reports about its offerings are everywhere, from the promotional to the critical, from the superficial to the in-depth. And, with each passing week, we hear more and more about the dangers posed by such a powerful entity inserting itself into so many aspects of our individual and collective existence. Thus, Google not only dominates markets, it also dominates our minds – to such a degree that it is difficult not to conflate the generic issue of search engines with the specific practices of Google. This is unfortunate, since the issues of search, questions of classification, and access to information run much deeper than the business model of a single company. They represent historical shifts (as well as continuities) in how we relate to the world. It is doubly unfortunate because Google’s primary business is no longer search per se; rather, it has become an advertising company (the source of 98% of its revenues2). “Search” is just one of the products that create an environment in which access to individual users, identified through detailed, personal data profiles, can be monetized.

Yet, no matter what the stock market momentarily thinks of the “tech sector,” we are still just in the foundational phase of the digital information environment. The speed and dynamism with which search technologies are developing are testimony to that. Keeping up with even the most general details is daunting, to say nothing of the technical specifics, which in any case are often shrouded in trade secrets and opaque research. Fortunately, keeping up with this breathless innovation is the wrong approach. We need instead to shift our focus to the structural, long-term political issues that are emerging in this development. It is crucial that we acknowledge both that the tools we increasingly depend upon are willfully designed and that their artifacts – most obviously the hierarchies in markets and in search results – are neither natural nor arbitrary. Innocent utilities that blend into the routine of everyday work and leisure subtly bend our perceptions and weave their threads into the fabric of our cognitive reality. Most people accept the framing imposed by these technologies uncritically.3 Doing so is dangerous.

Looking into the social and technological construction of information and knowledge, we ask some basic questions: How is computer-readable significance produced? How is meaning involved in machine communication? Where is the emancipatory potential of having access to such vast amounts of information? And where are the dangers of having to rely on search engines – particularly when operated by opaque monopolies – to make use of that information? Could it all be different? These questions of culture, context, and classification in information systems are crucial: what is at stake is nothing less than how we, as individuals and collectives, find out about the world.

Though rarely thought of as a “mass medium”, search engines occupy a critical juncture in our networked society. In many ways (and increasingly), their influence on our culture, economy, and politics dwarfs that of the broadcast networks, radio stations, and newspapers. Yet we still do not understand the kind of power they exert. It is clearly not reducible to classic editorial issues. Located at the bottlenecks of our information infrastructures, search engines exercise extraordinary control over data flows in what are otherwise largely decentralized networks. Their resulting power is, as always, accompanied by opportunities for abuse and by concerns about how to ensure its legitimate and appropriate exercise.

The present volume is an attempt to contribute to the public debate of these issues. It is organized into four sections. The first deals with histories. Paul Duguid examines the arrangements that have framed the practice of search, beginning with the first Sumerian libraries. He sees two trends working across periods: on the one hand, the need to manage expanding volumes of information, which he sees as one of the drivers in the evolution of storage, organization, and search; and, on the other, an ongoing tension between practices of making information available more easily (in the name of freedom) and practices that restrict the ways it can be handled (in the name of quality). After this broad overview, Robert Darnton focuses on the history of how we have handled text and the transformations of the library as one of its core institutions. He begins with the observations that information has always been unstable, and that each age has been an information age in the sense that its particular ways of handling (textual) information were profoundly influential. This suggests that information never simply corresponds to an external reality but, rather, is always (also) the product of particular storage methods and retrieval procedures. Thus, Darnton asks, what is gained, and what is lost, as we move from one information processing system to another? With an eye on this instability, he concludes: “Long live Google, but don’t count on it living long enough to replace that venerable building with the Corinthian columns.”

Google and virtually all other search engines today are based on link analysis; that is, they analyze the links pointing to a document in order to assess its relative importance. While this is often thought of as a major breakthrough, Katja Mayer shows that this sociometric approach itself has a long history. It was first developed in the early 20th century as a politically progressive approach to help small groups become aware of their own, often surprising internal dynamics. It was later transformed into a management technique to assess the sciences without having to deal with the intricacies of the arguments made in the ever-expanding number and size of their subdisciplines. The Science Citation Index, developed in the 1950s, seemed to offer a politically neutral, purely formal way of determining the importance of publications and scientists. This method, taken up by search engines, is now applied to all informational domains. Yet as it becomes more transparent, it is increasingly – like the citation index before it – subject to manipulation, and its inherent limitations are exposed. The final chapter in this section is by Geert Lovink. He returns to Joseph Weizenbaum’s observation that “not all aspects of reality are computable” and asks how we can, in a historically and socially informed way, think about the flood of information that search engines try to make accessible for us. His advice: Stop searching. Start questioning!
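
To make the mechanics of link analysis concrete, the following is a minimal, illustrative sketch of a PageRank-style iteration over a toy link graph. The graph, damping factor, and iteration count are invented for this example; it shows only the general idea of scoring documents by their incoming links, not how Google or any other search engine actually computes its rankings.

```python
# Illustrative sketch only: a simplified PageRank-style computation over a
# hypothetical toy link graph. Real search engines work at vastly larger
# scale and combine many additional signals.

# Each page maps to the set of pages it links to (all names are invented).
links = {
    "a": {"b", "c"},
    "b": {"c"},
    "c": {"a"},
    "d": {"c"},
}

damping = 0.85                                # probability of following a link
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}   # start from a uniform score

for _ in range(50):                           # iterate until scores stabilize
    new_rank = {p: (1 - damping) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = damping * rank[page] / len(outgoing)
        for target in outgoing:
            # a page passes a share of its own score to the pages it links to
            new_rank[target] += share
    rank = new_rank

# Pages with many (or highly ranked) incoming links end up with higher scores.
for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(page, round(score, 3))
```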

The book’s second section focuses on liberties. Search engines empower individuals by making vast amounts of information available to them, yet their increasing centrality also makes them, voluntarily and involuntarily, arbiters of these very freedoms. Claire Lobet-Maris draws on the tradition of technology impact assessment to develop a framework for bringing the technologies of search into the democratic debate and for envisioning accountability. For her, three main questions need to be asked: one concerning equity (the equal chance to be found online), one concerning the tyranny of the majority (link analysis favoring the popular), and one concerning transparency (that is, the ability to assess and contest how search engines work). All this leads to a discussion of autonomy, which, as she indicates, might be reduced when search engines assume that the “clicking body”, i.e., the tracks that people leave and search engines collect, is more reliable than the “speaking body”, i.e., what people actually say about themselves. Joris van Hoboken approaches the question of liberties from the other side, looking at how search engines are regulated in European law and how they are becoming targets for efforts to restrict access to information by being forced to censor their search results. Van Hoboken points out that the many legal gray zones in which search engines must operate restrict the opportunities for new entrants in the field. One of the reasons Google can still provide its services is its large and powerful legal department, a prerequisite for working in this area. The third chapter in this section, by Felix Stalder and Christine Mayer, returns to the question of whether search engines enhance or diminish users’ autonomy. They approach this question via the issue of personalization, that is, of tailoring services – to users and advertisers – on the basis of extensive user profiles. Will this open the way to overcoming the tyranny of the majority by making available information that is not generally popular, or will it decrease autonomy by strengthening surveillance and manipulation through social sorting?

The book’s third section directly examines issues of power. First, Theo Röhle employs concepts derived from Foucault and Actor–Network Theory to examine the subtle combination of rewards and punishments employed by Google to keep webmasters within bounds. This strategy, he argues, involves the establishment of a disciplinary regime that enforces a certain norm for web publishing. Turning to Google’s relationship to its users, he finds forms of power that aim at controlling differential behavior patterns by gaining an intimate statistical panorama of a population and using this knowledge as a means of predictive risk management. Bernhard Rieder observes that discussions about the power of search engines are often hampered by the considerable distance between technical and normative approaches. To counter this, he develops a technologically grounded normative position that advocates plurality, autonomy, and access as alternative guiding principles for both policy and design. His conclusions are surprising: one of the most important ways to restrain the unchecked power of search engines might be to strengthen the ability of third parties to rerank the search results. We need to demand: access to the index! This would allow a wide range of actors to take advantage of the search engines’ vast infrastructure (which is extremely hard to replicate) while applying other methods of ranking (which is comparatively doable). Moving from a liberal to a more radical perspective, Matteo Pasquinelli examines how Google extracts value from individual actions and the common intellect and transforms it into network value and wealth. To frame how such value is accumulated and extracted, Pasquinelli sketches a concept of “cognitive rent”, which no longer needs intellectual property but, instead, is particularly attuned to “free culture” and “free labor”. The final paper in this section is by Konrad Becker. In a broad-ranging essay, he looks at the role of classification systems as techniques of power, highlighting that such “technologies of the mind are political philosophy masked as neutral code”.

The final section deals with issues of visibility. Richard Rogers observes a creeping “googlization” of the entire media landscape, that is, how approaches made dominant by Google are now being employed beyond its realm. These approaches are never purely technical but also necessarily political, if not by intent then certainly in effect. As one way of assessing these developments, he proposes studying “how subtle interface changes imply a politics of knowledge”. Metahaven, a studio for research and design, focuses again on sociometrics and the dominant paradigm underlying Google’s legendary PageRank method. However, rather than concentrating on the usual link-rich nodes, they ask “how a different take on the sociability of weak ties may bring a different appreciation of their relevance to networks”. Drawing on advanced network theory, they contrast an approach focusing on the redundancy of the center with one that seeks out particular positions at the periphery. They look at sites that bridge densely connected clusters and which might actually provide access to the greatest amount of information, since they bring together otherwise disconnected worlds. They use the Obama presidential campaign as well as the network structure of Al Qaeda as examples of the potential of focusing on unique weak ties rather than on redundant strong ones. The last chapter in this section is by Lev Manovich, who reports on his ambitious new project. He leaves behind the dominant approach of trying to find the one right document. Rather, he argues for the need to analyze patterns hidden in large datasets. For the field of digital culture, he argues that we need to leave behind 20th century paradigms, which were developed on the basis of relatively small datasets that allowed close inspection of each item in isolation (think of a painting hanging on a white wall in a museum). Instead, he argues, we must come to terms with the fact that cultural development can no longer be reduced to a few privileged producers: it is now driven by the interactions of millions of producers, using the same tools and acting on the same information. Manovich thus abandons the logic of the results list and envisions new ways to follow global digital cultures. In pointing to new developments and breakthroughs in organizing, classifying, and analyzing large datasets, this is where history meets the future. We are thus reminded that liberties always need to be defended and renegotiated and that visibility and invisibility must be in the service of autonomy and empowerment.

Acknowledgments


Many of the papers in this volume were first presented at the Deep Search conference, which took place in Vienna on November 8, 2008. The original presentations are available online as high-quality video streams.4 The conference was organized and conceptualized by the World-Information Institute, realized in partnership with the international research network IRF (Information Retrieval Facility), and supported by Matrixware Information Services and the Austrian Federal Ministry of Education, Science and Culture. Producing a conference and a subsequent book is a task that cannot be realized without the help and generous support of the many individuals involved in the process. We would like to thank all participants for their great contributions and for the enthusiasm they brought to the conference and to the making of this book. We have also benefited greatly from Patrice Riemens, who put us in contact with ongoing research and publication projects that we might otherwise have missed. Ted Byfield provided valuable input, and Christine Mayer and Aileen Derieg carefully edited all the papers.

Vienna, April 2009

Konrad Becker and Felix Stalder
WORLD-INFORMATION INSTITUTE


Notes

1 In the US, Google’s market share is 72%, in India 81%, in Germany, in Chile 93%, and in the Netherlands 95%. Only in East Asia is Google not dominant: in China its share is 26%, in Taiwan it is 18%, and in Korea it is as low as 3%. For details, see http://googlesystem.blogspot.com/2009/03/googles-market-share-in-your-country.html
2 Eric Schmidt, Interview with Charlie Rose (March 6, 2009), http://www.charlierose.com/view/interview/10131
3 Lee Rainie and Graham Mudd, “Search Engines: Project Data Memo”, Pew Internet & American Life Project (Aug. 2004). Available at http://www.pewinternet.org/pdfs/PIP_Data_Memo_Searchengines.pdf (“The average visitor scrolled through 1.8 result pages during a typical search.”)
4 http://world-information.org/wii/deep_search/
