Dissecting the Gatekeepers – Relational Perspectives on the Power of Search Engines

The advent of new media technologies is almost inevitably accompanied by discourses that oscillate between technological and social determinism. In the utopian version of these accounts, technology is seen as supporting democratic social dynamics; in the dystopian version, it instead becomes a colonizing force that pre-structures individual and group behavior. Search engines, once heralded as empowering tools to navigate online spaces and now increasingly described as “evil” manipulators and data collectors, have been no exception to this rule.

In light of the important role that search engines play in the online information environment, the growing concern about their power is certainly justified. It is questionable, though, whether dystopian technological determinism can provide adequate answers to these concerns. Due to their position in the midst of conflicting interests of users, advertisers and content producers, search engines represent an area of especially dynamic and ambivalent power relations. The determinist image of search engines as extremely powerful entities drastically reduces the complexity of these relations to a set of unidirectional effects.

In order to avoid such reductionist accounts, a more thorough reflection on the concept of power itself is required. Here, a relational perspective on power is advocated, drawing on the writings of Michel Foucault and recent approaches in Science and Technology Studies informed by Actor-Network-Theory. Rather than treating search engines as stable technological artifacts, the analysis seeks to map how different actors are involved in negotiating the development of search technology.

After outlining the theoretical framework, the paper engages in an in-depth discussion of the search engine Google, focusing on its relationship to webmasters and users. Due to its domination of the search market, Google plays a special role in directing the users’ attention towards the webmasters’ content. Attempts by webmasters to game the ranking system in order to boost the position of their websites are met by Google with a subtle combination of rewards and punishment. It is argued that this strategy involves the establishment of a disciplinary regime that enforces a certain norm for web publishing.

Google’s relationship to the users, on the other hand, is characterized by less invasive forms of power. By inserting itself deeply into the users’ information environment, Google can collect and analyze unprecedented amounts of user data. Google plans to use this data in increasingly sophisticated advertising schemes. It is argued that the modeling of segmented consumption behavior that these schemes are based upon involves a governmental form of power. It is a kind of power that aims at controlling differential behavior patterns by gaining an intimate statistical knowledge of a population and using this knowledge as a means of predictive risk management.

“Cut off the king’s head” – A reversal of perspectives

Following Michel Foucault, power needs to be treated as a relational concept. His demand that political theory needs to “cut off the king’s head”1 involves a critique of mechanistic, centralistic and linear notions of power. Analyses subscribing to such notions usually seek to describe how certain powerful actors manage to enforce their intentions against the will of others. Their aim is to discern sources of power and the effects that emanate from these sources.2 A relational perspective on power involves a reversal of this perspective – instead of identifying fixed points of sovereignty it seeks to map the relations that render the powerfulness of these entities possible in the first place.

Actor-Network-Theory (ANT) has been one of the primary vehicles for introducing relational concepts of power into the study of science and technology. Its advocate Bruno Latour sees sociology in danger of being “drunk with power”, since there are scholars “who confuse the expansion of powerful explanations with the composition of the collective.”3 According to Latour, instead of explaining social relations by picking out powerful entities, power has to be explained by the acts that constitute actors as powerful.4

The move to abandon mechanistic, centralistic and linear concepts of power opens up socio-technological environments for a subtler enquiry into their inner dynamics. Rather than simply addressing the effects of technology on society or vice versa, “[i]t is possible to say that techniques and actors […] evolve together.”5 This shift of perspective produces a topological view of the relations between actors. Both the relations and the characteristics of the actors are seen as inherently dynamic. The current state of things is perceived as a temporary stabilization of these actor-networks. Stability is not taken for granted but is a phenomenon that needs to be explained.

Bowker/Star advocate this kind of “gestalt switch” in their analyses of technological infrastructure: “Infrastructural inversion means recognizing the depths of interdependence of technical networks and standards, on the one hand, and the real work of politics and knowledge production on the other.”6 ANT has been criticized for losing sight of power relations when merely reiterating success stories of technology adoption.7 However, as Bowker/Star show, ANT-inspired analyses do not have to be limited to this kind of consensus-oriented approach. Rather, the concepts of “stabilization” and “irreversibility” can be employed analytically in order to reveal the micro-political negotiations between actors.

Lucas D. Introna’s call to “maintain the reversibility of foldings”8 adds a more explicit normative stance to this discussion. He envisions an analysis of technology whose task is

[n]ot merely to look at this or that artifact but to trace all the moral implications (of closure) from what seems to be simple pragmatic or technical decisions – at the level of code, algorithms, and the like – through to social practices, and ultimately, to the production of particular social orders, rather than others.9

The “disclosive ethical archaeology” advocated by Introna aims at examining the kinds of agencies that are involved in points of (technological) closure and what developments are fostered by these constellations. The normative goal is to create a situation where these points of closure can be scrutinized and re-opened in order to allow other developments to emerge.

While the stabilization of an actor-network means that alternative trajectories are abandoned10, this does not mean that power only appears in the guise of suppressing these alternative trajectories. Again in line with Foucault, power should not exclusively be treated as inhibiting, as something that constrains the free unfolding of relations and discourse, but rather as a productive force that fosters certain kinds of relations rather than others. In ANT terminology, this productive aspect of power is contained in the concept of “translation”. Actors constantly seek to recruit other actors into their networks and to render them productive within the scope of their own “program of action”.11

Search engines as sites of the co-constitution of agency

Against the backdrop of the theoretical perspective outlined above, the following discussion takes a closer look at the search engine Google as a specific site of the co-constitution of socio-technological agency. To start with, some of the main actors involved in this negotiation process can be identified, along with their distinct “programs of action”.

The World Wide Web has considerably lowered the entry barriers to content production and dissemination, resulting in a substantial proliferation of available information. As Herbert Simon12 pointed out, such a proliferation of information renders attention a scarce resource. Content producers devise different strategies in order to boost the visibility of their content. Depending on the type of content – e.g. private, scientific or commercial – this attention can be turned into different kinds of currency: social, scientific or monetary.

The struggle for attention results in an abundance of information, threatening to destroy the users’ feeling of control over their information environment. The urge to avoid this kind of “narcissistic injury” creates a demand for technological interfaces that re-instate the user in a position of control (or at least create this illusion).13

Search engines insert themselves into this conflict of interests between webmasters and users by diverting both kinds of actors through their own network. Google has been most successful in this enrollment process, not least because its clean interface and comparatively spam-free results effectively re-installed the illusion of control for the users. In the language of ANT, Google has established itself as an obligatory passage point – both webmasters and users need to pass through this point if they want to proceed with their programs of action. However, such traveling through other actors’ networks involves a translation – a change of the conditions that constitute an actor.

Search engines’ position as intermediaries between content and users has called forth comparisons to gatekeeping processes in earlier media settings.14 Considering the above discussion on the question of power, however, the notion of the gatekeeper as an embodied entity, be it human or technical, seems oversimplified. By looking at the way actors negotiate the reciprocal recruitment processes, it is possible to arrive at a more nuanced picture of the power relations involved. The following sections focus on the negotiation process on the part of the webmasters and the users respectively.

Disciplining the webmasters

In recent years, Google has intensified its efforts to establish means of communication with webmasters. In August 2006, “Webmaster Central”15 was introduced, a collection of services and information aimed especially at webmasters. The site is accompanied by the “Webmaster Central Blog” as well as by forums where webmasters can post questions and discuss a range of issues relating to their content.

“Webmaster Tools” is one set of services included in the Webmaster Central. Here, Google makes some of the information they keep about websites available to the respective webmasters. They can access basic statistics about crawling frequencies and potential errors during the crawling process, and they receive information on how to adapt their content in order to make it more accessible to the Google crawlers.

One adaptation, which is encouraged within the Webmaster Tools, is the creation of a Sitemap, an XML file that lists the URLs of a domain. Sitemaps provide the search engine with information about URLs it otherwise cannot know about, either because there is no link to them or because they are created dynamically from a database. Thus, by providing this information in a machine-readable format, webmasters partly compensate for the shortcomings of the crawler technology.
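
The Sitemap format itself is a simple, publicly documented XML schema. The following minimal sketch (in Python, with a purely illustrative domain and URLs) generates such a file according to the sitemaps.org protocol; the files submitted via the Webmaster Tools follow the same basic structure.

```python
# Minimal sketch: generating an XML Sitemap (sitemaps.org protocol, version 0.9).
# The domain and URLs are purely illustrative.
urls = [
    "http://www.example.com/",
    # e.g. a database-driven page that no other page links to
    "http://www.example.com/catalogue?id=1234",
]

entries = "\n".join(f"  <url>\n    <loc>{u}</loc>\n  </url>" for u in urls)
sitemap = (
    '<?xml version="1.0" encoding="UTF-8"?>\n'
    '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
    f"{entries}\n"
    "</urlset>"
)
print(sitemap)
```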

Another service that is part of the Webmaster Tools is called “How Googlebot sees your site”. It allows webmasters to check the visibility of keywords within their content. The Webmaster Tools also provide a list of in-links, which lets webmasters check who linked to their content.

Questions like keyword visibility and link building belong to the practices of search engine optimization (SEO). Ever since the inception of search engines, webmasters have tried to devise techniques in order to improve the position of their content in the results. SEO techniques range from simple text adaptations to elaborate linking schemes and methods to direct crawlers and human visitors in different directions.16 For search engines, these practices are a cause for concern, since they disturb the models of relevancy that are incorporated within their ranking algorithms.

Instead of condemning SEO practices altogether, though, Google has become increasingly adept at advocating a certain version of content optimization that fits its own agenda. The company’s own SEO guide, published in November 2008, focuses on ways to adapt content which ultimately benefit the crawling, indexing and display capacities of Google itself.17 A sharp line is drawn between these “Google-friendly” practices and more outright manipulative SEO techniques. In its quality guidelines, Google puts forth a set of rules for webmasters, which prohibit the use of SEO techniques such as doorway pages, redirects, hidden text and keyword stuffing.18 A violation of these guidelines is punished with a temporary ban from the index. If the content is not changed within a certain time frame, this ban is made permanent.

The Webmaster Tools can be seen as part of a strategy to associate webmasters and to normalize their behavior. The crawling statistics are used as an incentive to establish a communication channel between Google and the webmasters. Using this channel, the webmasters are encouraged to adapt their content in a way that is advantageous for Google. Further, webmasters are asked to report sites that do not comply with the rules set up by Google. For this purpose, the Webmaster Tools provide forms where webmasters can report spam and link selling.

Google comments on the success of this reporting scheme in the “Webmaster Central Blog”:

We are proud of our users who alert us to potential abuses for the sake of the whole internet community. We appreciate this even more, as PageRank™ (and thus Google search) is based on a democratic principle, i.e. a webmaster is giving other sites a “vote” of approval by linking to it. In 2007 as an extension and complement of this democratic principle, we want to further increase our users’ awareness of webmaster practices that do or do not conform to Google’s standards. Such informed users are then able to take counter-action against webspam by filing spam reports.19

Due to its domination of access to online information, Google is able to delineate a norm for web publishing. The specific blend of cooperation and punishment observable in the Webmaster Tools is akin to establishing a disciplinary regime. Involving the webmasters themselves in the identification of spam reminds them of their own risk of getting “caught” should they engage in the wrong SEO practices. It thereby reinforces Google’s demarcation between “legitimate” optimization and “illegitimate” manipulation.

The crossing of this line is monitored and punished automatically, using formal definitions of spam that are developed by Google’s web spam team. Current research indicates that such automatic processing is bound to produce false positives, so that websites are banned from the index even when adhering to Google’s guidelines.20 Due to Google’s high market share, these websites will remain invisible to most Internet users as long as the ban is in effect.

When taking the allusions to “democratic principles” in the Webmaster Central Blog at face value, the webmasters’ reporting of spam would have to be interpreted as a mandate for Google to enforce quality standards on the web. However, without any transparency in the processing of these reports and the way they are turned into algorithmic sorting criteria, this is hardly a coherent line of reasoning. While it could be argued that the web requires some kind of mechanism to identify spam in order to remain navigable, it is certainly questionable whether a commercial outfit like Google should be entrusted with this kind of task.

Governing the users

As has been amply demonstrated by Michael Zimmer21, Google is able to collect very large amounts of user data via a wide range of services. Considering the user base of the search engine itself, the reach of the advertising network DoubleClick and the statistics service Google Analytics as well as the depth of user data available via services like Gmail and Orkut, it seems evident that today, Google’s ability to track users online is unmatched by any other Internet company.

Perhaps as a reaction to growing scrutiny by privacy advocates22, Google has recently become more outspoken about their reasons to collect user data. One of these stated reasons is that user data can be drawn upon in order to enhance ranking algorithms.23 Search providers are increasingly trying to determine what the users “really” want to know, in order to improve the subjective relevance of results.24 Since many search queries are poorly formulated, more and more data is taken from the context of the search query, allowing for a range of additional “signals” to be incorporated into the ranking algorithm.

From event-based data to derived data

Completely personalized ranking models like those implemented in Google Web History25 involve the most elaborate matching techniques. Personalized search involves setting up a personal profile for an individual user and ranking results according to this profile.26 A patent application for Google’s personalized search services27 reveals what kind of data is stored in these profiles. The individual log entries that are collected during the use of Google’s search engine are termed “event-based data” and include:
• search queries
• IP addresses
• result clicks (i.e., the results presented in a set of search results on which the user has clicked)
• ad clicks (i.e., the advertisements presented to the user on which the user has clicked)
• browsing data (e.g., which locations a user visits; an image that the user views) and
• product events (e.g., searches for product reviews)

Additionally, other kinds of user activity that are not part of searching itself can be tracked. These include:
• advertisements presented and clicked on during an email session
• instant messaging
• word processing
• participation in chat rooms
• software application execution
• Internet telephone calls

These individual data items are connected in different ways in order to create a model of the individual user’s preferences, which can be employed to re-rank results. As an example of what Google calls “derived data”, the patent application describes how search queries can be matched to the topics of the Open Directory Project. By aggregating the queries over time, they can be processed into a weighted set of topic descriptors. This set serves as a model for topical user preferences and results are ranked accordingly.
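
The patent application does not disclose the actual classification or weighting scheme. The following sketch therefore only illustrates the general operation of turning event-based data into derived data, i.e. a weighted set of topic descriptors; the query-to-topic lookup and the log entries are invented.

```python
from collections import Counter

# Hypothetical mapping of query terms to Open Directory-style topics;
# the real classification step is considerably more elaborate.
TOPIC_LOOKUP = {
    "foucault": "Society/Philosophy",
    "pagerank": "Computers/Internet/Searching",
    "sitemap": "Computers/Internet/Web_Design",
}

def derive_topic_profile(event_log):
    """Aggregate event-based data (here: raw queries) into derived data:
    a weighted set of topic descriptors for one user."""
    counts = Counter()
    for event in event_log:
        for term in event["query"].lower().split():
            topic = TOPIC_LOOKUP.get(term)
            if topic:
                counts[topic] += 1
    total = sum(counts.values()) or 1
    return {topic: count / total for topic, count in counts.items()}

# Invented event-based log entries for a single user.
log = [
    {"query": "foucault governmentality", "ip": "192.0.2.1"},
    {"query": "pagerank algorithm", "ip": "192.0.2.1"},
    {"query": "foucault discipline", "ip": "192.0.2.1"},
]
print(derive_topic_profile(log))
# e.g. {'Society/Philosophy': 0.67, 'Computers/Internet/Searching': 0.33}
```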

From individual to collective

As has been shown in the case of AOL search queries that were released to the public in 2006, data gathered by search engines can reveal intimate details about real-life individuals.28 It therefore gives rise to many potential abuse scenarios, involving not only Google itself but also other actors such as governments, criminals and hackers. Such abuse scenarios serve as important argumentative tools to gauge the potential impact of surveillance practices and to devise adequate policy strategies. On the other hand, these scenarios are often limited to individualistic concepts of privacy. They are fuelled by a concern that surveillance makes it possible to compile dossiers on individuals and to use this information in a discriminatory manner.

However, data collection on a scale like Google’s is problematic even if it is not used in order to trace individuals. The collected data allows Google to run extensive statistical evaluations in order to construct models of user behavior and preferences. Surveillance scholar Oscar Gandy draws a historical analogy between such kinds of data mining techniques that are made possible by large-scale computing infrastructures and the invention of the microscope in the natural sciences. In both cases, the technical infrastructure involves a change of perspective that renders new kinds of classifications and typologies possible.29

In its corporate blog, Google stresses the advantage of modeling user behavior for ranking purposes.30 A simple model of user behavior that has been in use for a long time involves the assumption that users prefer results in their own language. Thus, results are re-ranked based on an analysis of the user’s IP address and the language of the query. Other models involve time as a factor. For example, if the collected data shows that most users prefer fresher results for a certain query, these kinds of results are ranked higher for subsequent queries.31
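
Google does not publish these signals or their weights. Under that caveat, the following sketch merely illustrates the logic of such model-based re-ranking; the documents, scores and boost factors are invented.

```python
from datetime import date

def rerank(results, user_language, prefer_fresh, today=date(2008, 12, 20)):
    """Re-rank results by boosting documents that match simple behavioral
    signals: the user's language and, where the aggregated data shows a
    preference for recent content, freshness."""
    def score(doc):
        s = doc["base_score"]                    # original relevance score
        if doc["language"] == user_language:
            s *= 1.2                             # language match boost
        if prefer_fresh:
            age_days = (today - doc["published"]).days
            s *= 1.0 / (1.0 + age_days / 365.0)  # decay older documents
        return s
    return sorted(results, key=score, reverse=True)

results = [
    {"url": "a.example", "base_score": 0.9, "language": "de", "published": date(2006, 1, 1)},
    {"url": "b.example", "base_score": 0.8, "language": "en", "published": date(2008, 12, 1)},
]
print([d["url"] for d in rerank(results, user_language="en", prefer_fresh=True)])
# ['b.example', 'a.example']
```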

Information Retrieval and ad targeting

While Google has made an effort to explain why user data is needed for the development of ranking algorithms, the company has been rather quiet about the commercial use of the collected data. In one of their earliest papers, the Google founders rejected such use, stating that they “believe the issue of advertising causes enough mixed incentives that it is crucial to have a competitive search engine that is transparent and in the academic realm.”32 In the course of Google’s development from an academic experiment to the world leader in online search, there has been a remarkable shift in this attitude. Today, Google CEO Eric Schmidt wants people to think of Google “first as an advertising system”.33

The basis of Google’s economic success is contextual advertising. Initially developed by Goto.com, a search engine for consumer goods, the system lets advertisers place bids for certain keywords. Whenever someone enters a search query containing the keyword, their ad is displayed.34 When Google adopted the practice of contextual advertising, they chose to display these sponsored results separately from the so-called “organic”, algorithmically selected results. It is questionable, though, whether this separation has made the problem of “mixed incentives” disappear. A study of search engine users in 2005 found that 62 percent of them were not actually aware of the distinction between paid and unpaid results.35
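
In its basic form, the mechanism can be sketched as a keyword lookup over advertiser bids. The advertisers, keywords and bid amounts below are invented; as note 34 indicates, real systems additionally weigh factors such as click-through rates and landing page quality.

```python
# Much-simplified sketch of keyword-triggered ad selection.
bids = [
    {"advertiser": "shoe-shop.example", "keyword": "shoes", "bid": 0.40},
    {"advertiser": "sneaker-store.example", "keyword": "shoes", "bid": 0.55},
    {"advertiser": "book-shop.example", "keyword": "books", "bid": 0.30},
]

def select_ads(query, bids, slots=2):
    """Return the highest-bidding ads whose keyword occurs in the query."""
    terms = set(query.lower().split())
    matching = [b for b in bids if b["keyword"] in terms]
    return sorted(matching, key=lambda b: b["bid"], reverse=True)[:slots]

print(select_ads("cheap running shoes", bids))
# the two "shoes" advertisers, ordered by bid
```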

Contextual advertising thus inserts itself into the flow of online information behavior and facilitates the real-time translation of information needs into consumption needs. This translation is possible because search queries are reasonably good representations of information needs. If the ad is matched well enough to the query, the hope is that the user will accept the commercial message as an adequate answer to their information need. However, poorly formulated search queries pose a problem for the functionality of this system as well. Even if ads are matched very well to the search query, they might not be matched very well to the user’s actual information need. The search providers’ answer is, again, to draw on the aggregated user data in order to alleviate this problem.

Modeling and predicting behavior

As in the context of ranking, the way to determine what the user “really” wants to know is to construct models out of the aggregated user data. Individual event-based log entries are bundled together according to certain models of individual and collective behavior. As Eric Picard points out, these kinds of segmented models present advertisers with new ways to reach particular audiences:

Tracking what someone is searching for online or which sites they visit will create an anonymous profile of that person’s interests. Those interests can be segmented out and compared to advertiser goals. Then the ads can be delivered to the right person across all media.36

The technique of first constructing segmented models of user interests and behavior and then matching tracked user data to these models is known as “behavioral targeting”. While it has been employed by large search providers like Yahoo since 2000, Google arrived relatively late on the scene with the acquisition of DoubleClick in 2007. In its privacy information, Google emphasizes that these techniques are “new territory” for the company.37 However, two patent applications reveal extensive plans to explore this territory in the future.

Firstly, Google wants to give advertisers the choice of a range of “targeting criteria” for the display of their ads. These include, among others, “geographic location, the language used by the user, the type of browser used, previous page views, previous behavior, user account, any Web cookies used by the system, user device characteristics, etc.”.38 While it would be technically feasible to let advertisers choose all these criteria individually, behavioral targeting usually implies selecting certain combinations of them. These segments, e.g. topical clusters or behavior patterns, can then be reached with specially targeted ads.
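
The patent application lists these criteria but not how they are combined. As a hypothetical illustration, a segment might be defined as a particular combination of criteria against which tracked profiles are matched; all names and values in the following sketch are invented.

```python
# Hypothetical segment combining several of the listed targeting criteria.
segment = {
    "geo": {"AT", "DE", "CH"},                   # geographic location
    "language": "de",                            # language used by the user
    "topics": {"Computers/Internet/Searching"},  # derived topical interest
}

def in_segment(profile, segment):
    """Check whether a tracked user profile falls into the segment."""
    return (
        profile["geo"] in segment["geo"]
        and profile["language"] == segment["language"]
        and bool(profile["topics"] & segment["topics"])
    )

profile = {
    "geo": "AT",
    "language": "de",
    "topics": {"Computers/Internet/Searching", "Society/Philosophy"},
}
print(in_segment(profile, segment))  # True: this profile would be served the targeted ads
```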

A patent application describing “network node targeting” provides more details about the operational layout of such systems. Based on social network data, the aim of this targeting method is to discover communities of users with shared interests and then to identify the most influential members of these communities. By letting advertisers place their ads on the profile pages of these influential members, the idea is that they can “target the entire community by displaying advertisements on the profile of member 1 alone”. There can also be additional incentives for the influential members to maintain their position in the network: “An influencer may receive financial incentives from advertisers in exchange for permission to display advertisements on the member’s profile.”39
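
The patent application does not specify how influence is measured. A simple degree count over an invented friendship graph is enough to illustrate the basic operation of singling out the most connected member of a community:

```python
from collections import defaultdict

# Invented friendship graph of one small community (undirected edges).
edges = [("m1", "m2"), ("m1", "m3"), ("m1", "m4"), ("m2", "m3"), ("m4", "m5")]

degree = defaultdict(int)
for a, b in edges:
    degree[a] += 1
    degree[b] += 1

# The most connected member is treated as the "influencer" on whose
# profile page ads aimed at the whole community would be placed.
influencer = max(degree, key=degree.get)
print(influencer, degree[influencer])  # m1 3
```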

This example reveals several key aspects of behavioral targeting practices. Firstly, it shows how targeting involves a specific topological ordering of a continuous information space. “Network node targeting” implies that user data is analyzed according to a particular model of what a community looks like and how influence is distributed within a community. Choosing this model of analysis thus involves focusing on certain relations between data items and ignoring others. Secondly, this kind of ordering involves ascribing a specific kind of meaning to certain combinations of data items. The influential members are addressed primarily as efficient disseminators of commercial messages. Such role ascriptions have social relevance, since they are components of individual and collective subjectivation processes. Thirdly, this example shows how the model reinforces itself. By financially encouraging the influential users to consolidate their position within the network, “network node targeting” in the long run confirms the assumptions about sociality it was built upon in the first place.

“Continuity with difference”

Behavioral targeting thus involves tracking users, constructing segmented models out of the aggregated data and feeding these models back to the users. This kind of segmentation for marketing purposes has been discussed from different angles by scholars of surveillance studies.40 Instead of portraying surveillance as an exclusively disciplinary endeavor, these scholars have stressed the dispersed and networked characteristics of current surveillance practices.

The kinds of power relations at stake in segmentation can best be understood by drawing on Foucault’s later writings on governmentality. Here, he describes power in terms of the “conduct of conduct”. Rather than simply enforcing a norm, this more indirect mode of power involves structuring a field of possibilities. It aims at an intimate statistical knowledge of a population in order to be able to predict behavior and adapt control strategies accordingly.41

Consequently, surveillance scholar Bart Simon argues that the creation of “databased selves” involves a shift from discipline to control. Drawing on Deleuze42, he remarks:

Discipline as a mode of power relies primarily on enclosures, be they material, cultural or psychical. Control however encourages mobility in an attempt to manage the wider territory and not just the social space of enclosures.43

Compared to discipline, control thus allows for a greater flexibility towards difference. The governing of consumers is less concerned with imposing forms of behavior on them and more with capturing their actions within a controllable framework.

The flexibility towards difference enables what Tom Hespos44 calls “marketing to the unmassed”. Whereas the loss of social cohesion and the differentiation of consumption patterns could pose a threat to the continuity of capitalist accumulation, the constant flow of user data into search engines and the corresponding segmented targeting techniques provide an interface for ensuring consumer predictability. It is part of a risk management strategy where the subjects are “fully integrated into a living machine that functions not against their will, their thoughts, their desire, their body, etc., but through those.”45

Conclusions

The analysis of Google as a site of the co-constitution of socio-technological agency has revealed the emergence of two different kinds of power relations. On the one side, webmasters are enrolled into a disciplinary regime where transgressions of the norm are punished by exclusion from user attention. Encouraging conforming behavior and letting webmasters participate in identifying deviations amounts to a strategy that aims at the internalization of this norm.

On the other side, the constant monitoring of user behavior as the basis of segmented targeting techniques represents a governmental form of power. The behavioral models that are constructed out of the aggregated data locate users within certain kinds of consumption patterns. Feeding these models back to the users renders this “statistical inclination” productive.46 It creates a more predictable and controllable field of possible actions, ensuring continuity while accepting difference.

Returning to the initial question of stabilization and irreversibility, it has to be asked whether the different actors have accepted the role ascriptions offered to them or whether they manage to keep up a process of negotiation. While webmasters hardly have any choice but to use Google to reach an audience, their ongoing attempts to game the ranking through SEO techniques show that Google’s publishing norm is a contested one. Rather than being able to induce technological closure, Google has to devise flexible strategies to react and adapt to these challenges.

Considering their involvement in reporting spam, webmasters have to ask themselves whether they want Google to be the institution responsible for enforcing quality standards on the web. This also involves the question of whether there could be different solutions to this problem and how they would fit into the development of non-commercial search alternatives.

The collection of user data, on the other hand, can be placed in the context of a broader re-negotiation of the boundaries between private and public as well as commercial and non-commercial on the Internet. Again framing this process in terms of the stabilization of role ascriptions, the first question is on what basis negotiations can take place at all. Users are usually left in the dark about the ways value is created from their data. Without technical means to access, edit and erase the data, they are not in a position to withdraw, either partially or completely, from the collection process.

Technically speaking, there is a need for interfaces that provide users with a greater level of interaction with their data. On the level of discourse, these developments should be accompanied by a new privacy vocabulary (e.g. as suggested by Helen Nissenbaum47) in order to be able to address privacy concerns on a more granular level. For users who prefer to opt out completely, the development of privacy enhancing techniques, e.g. those that obfuscate data traces or help to avoid tracking altogether, should be encouraged.

A combination of such technical and discursive measures could be a starting point for strengthening the users’ position in the negotiation process vis-à-vis search engines and advertisers. Depending on the sophistication of these measures, they might even be able to show escape routes out of the controlled spaces of user segmentation and targeting.

Notes

1 Michel Foucault, Power/Knowledge. Selected Interviews and Other Writings 1972-1977 (Brighton: Harvester Wheatsheaf, 1980), 121.
2 Stewart R. Clegg, Frameworks of Power (London: Sage, 1989), 153-159.
3 Bruno Latour, Reassembling the Social. An Introduction to Actor-Network-Theory (Oxford; New York: Oxford University Press, 2005), 261. As Jens Schröter argues, this danger of overly “powerful” explanations also applies to parts of German media theory when, after de-centering the human subject, it seeks to reinstate media technology itself as a new “technological subject”. (see Jens Schröter, “Der König ist tot, es lebe der König. Zum Phantasma eines technologischen Subjekts der Geschichte”, in Reale Fiktionen, fiktive Realitäten: Medien, Diskurse, Texte, ed. Johannes Angermüller, Katharina Bunzmann and Christina Rauch (Hamburg: LIT, 2000), 13-24.)
4 Bruno Latour, “The Powers of Association,” in Power, Action, and Belief: A New Sociology of Knowledge?, ed. John Law (London: Routledge & Kegan Paul, 1986), 264-80.
5 Michel Callon, “Variety and Irreversibility in Networks of Technique Conception and Adoption”, in Technology and The Wealth of Nations. The Dynamics of Constructed Advantage, ed. Dominique Foray and Christopher Freeman (London, New York: Pinter Publishers, 1993), 250.
6 Geoffrey C. Bowker and Susan Leigh Star, Sorting Things Out: Classification and its Consequences (Cambridge, MA: MIT Press, 1999), 34.
7 Thomas Berker, “The Politics of ‘Actor-Network Theory’. What can ‘Actor-Network Theory’ Do to Make Buildings More Energy Efficient?”, Science, Technology and Innovation Studies 1 (2006): 61-79, http://www.sti-studies.de/fileadmin/articles/berker-politicsofant-2006.pdf (accessed December 20, 2008).
8 Lucas D. Introna, “Maintaining the Reversibility of Foldings: Making the Ethics (Politics) of Information Technology Visible”, Ethics and Information Technology 9 (2007): 11-25.
9 Introna (2007): 16.
10 Callon (1993)
11 Bruno Latour, “Technology is Society Made Durable”, in A Sociology of Monsters. Essays on Power, Technology and Domination, ed. John Law (London: Routledge, 1991), 103-131.
12 Herbert Simon, “Designing Organizations for an Information-Rich World,” in Computers, Communications, and the Public Interest, ed. Martin Greenberger (Baltimore, London: Johns Hopkins Press, 1971), 39-72.
13 Hartmut Winkler, “Search Engines: Meta-Media on the Internet?” in Readme! Filtered by Nettime: ASCII Culture and the Revenge of Knowledge, ed. Josephine Bosma (New York: Autonomedia, 1999), 29-37.
14 For example: Jens Wolling, “Suchmaschinen – Gatekeeper im Internet,” Medienwissenschaft Schweiz 2 (2002): 15-23.
15 Google. “Google Webmaster Central.” http://www.google.com/webmasters (accessed December 20, 2008).
16 Elizabeth van Couvering, “The Economy of Navigation: Search Engines, Search Optimisation and Search Results,” in Die Macht der Suchmaschinen. The Power of Search Engines, ed. Marcel Machill and Markus Beiler (Cologne: Herbert von Halem Verlag, 2007), 115-119.
17 Google. “Search Engine Optimization Starter Guide”, http://www.google.com/webmasters/docs/search-engine-optimization-starter-guide.pdf (accessed December 20, 2008).
18 Google. “Webmaster Guidelines – Webmaster Help Center”, http://www.google.com/support/webmasters/bin/answer.py?answer=35769&topic=15260 (accessed December 20, 2008).
19 Google, “An update on spam reporting”, Official Google Webmaster Central Blog, posted March 28, 2007, http://googlewebmastercentral.blogspot.com/2007/03/update-on-spam-reporting.html (accessed December 20, 2008).
20 See Richard Rogers in this volume.
21 Michael Zimmer, “The Gaze of the Perfect Search Engine. Google as an Infrastructure of Dataveillance”, in Web search. Multidisciplinary Perspectives, ed. Amanda Spink and Michael Zimmer (Berlin; Heidelberg: Springer, 2008), 77-99.
22 For example: Center for Democracy and Technology. “Search Privacy Practices: A Work In Progress. CDT Report – August 2007” http://www.cdt.org/privacy/20070808searchprivacy.pdf (accessed December 20, 2008).
23 Hal Varian, “Why Data Matters”, Official Google Blog, posted April 3, 2008, http://googleblog.blogspot.com/2008/03/why-data-matters.html (accessed December 20, 2008).
24 In a study of concepts of “quality” among search engine producers, Elizabeth van Couvering observes: “Relevance has changed from some type of topical relevance based on an applied classification to something more subjective.” (Elizabeth Van Couvering, “Is Relevance Relevant? Market, Science, and War: Discourses of Search Engine Quality”, Journal of Computer-Mediated Communication 12 (2007), http://jcmc.indiana.edu/vol12/issue3/vancouvering.html (accessed December 20, 2008).) See also Zimmer (2008) on the broader context of what he calls “the quest for the perfect search engine”.
25 Google, “Web History” www.google.com/psearch (accessed December 20, 2008).
26 Kevin Keenoy and Mark Levene, “Personalisation of Web Search,” in Intelligent Techniques for Web Personalization, ed. Sarabjot Singh Anand and Bamshad Mobasher (Berlin, Heidelberg, New York: Springer, 2005), 201-28.
27 Andrew Fikes et al., Systems and methods for analyzing a user’s web history, US Patent Application: 11097883 (2005)
28 Michael Barbaro and Tom Zeller jr., “A Face is Exposed for AOL Searcher No. 4417749”, New York Times, August 9, 2006.
29 Oscar H. Gandy, The Panoptic Sort. A Political Economy of Personal Information (Boulder, Colorado: Westview, 1993), 83.
30 Google, “Introduction to Google Search Quality”, Official Google Blog, posted May 20, 2008 http://googleblog.blogspot.com/2008/05/introduction-to-google-search-quality.html (accessed December 20, 2008).
31 Anurag Acharya et al., Information retrieval based on historical data, US Patent No 7346839 (2003)
32 Sergey Brin and Lawrence Page, “The Anatomy of a Large-Scale Hypertextual Web Search Engine”, http://infolab.stanford.edu/~backrub/google.html (accessed December 20, 2008).
33 Fred Vogelstein, “As Google Challenges Viacom and Microsoft, Its CEO Feels Lucky,” Wired, (2007), http://www.wired.com/print/techbiz/people/news/2007/04/mag_schmidt_qa (accessed December 20, 2008).
34 The procedure for determining the position of particular ads is more complicated, since it partly depends on an analysis of the landing page and click-through-rates. (see Google, “How are ads ranked?” http://adwords.google.com/support/bin/answer.py?hl=en&answer=6111 (accessed December 20, 2008).)
35 Deborah Fallows, “Search Engine Users. Internet Searchers are Confident, Satisfied and Trusting – But They are also Unaware and Naïve”, http://www.pewinternet.org/pdfs/PIP_Searchengine_users.pdf (accessed December 20, 2008).
36 Eric Picard, “Hyperlinking and Advertising Strategy”, in The Hyperlinked Society: Questioning Connections in the Digital Age, ed. Joseph Turow and Lokman Tsui (Ann Arbor: The University of Michigan Press, 2008), 163.
37 Google, “Privacy at Google”, https://services.google.com/blog_resources/google_privacy_booklet.pdf (accessed December 20, 2008).
38 Jayesh Sharma, Using search query information to determine relevant ads for a landing page, US Patent Application: 11323303 (2005)
39 Terrence Rohan et al., Network Node Ad Targeting, US Patent Application: 0080162260 (2006)
40 Kevin D. Haggerty and Richard V. Ericson, “The surveillant assemblage”, The British Journal of Sociology 51 (2000): 605-22; Greg Elmer, Profiling Machines. Mapping the Personal Information Economy (Cambridge, Mass.: MIT Press, 2004)
41 Nikolas Rose and Peter Miller, “Political Power Beyond the State: Problematics of Government”, British Journal of Sociology 43 (1992): 173-205.
42 Gilles Deleuze, “Postscript on the Societies of Control,” October 59 (1992): 3-7.
43 Bart Simon, “The Return of Panopticism: Supervision, Subjection and the New Surveillance,” Surveillance & Society 3 (2005): 15, http://www.surveillance-and-society.org/Articles3(1)/return.pdf (accessed December 20, 2008). In a similar vein, Adam Arvidsson identifies a paradigm shift from “containment” to “control” in the development of marketing during the 1950s. After this shift, the role of marketing “was no longer understood primarily as that of disciplining consumer demand, but rather as that of observing and utilizing ideas and innovations that consumer’s [sic] themselves produced.” (Adam Arvidsson, “On the ‘Pre-history of the Panoptic Sort’: Mobility in Market Research,” Surveillance and Society 1 (2003): 456-74, http://www.surveillance-and-society.org/articles1(4)/prehistory.pdf (accessed December 20, 2008).)
44 Tom Hespos, “How Hyperlinks Ought to Change the Advertising Business”, in The Hyperlinked Society: Questioning Connections in the Digital Age, ed. Joseph Turow and Lokman Tsui (Ann Arbor: The University of Michigan Press, 2008), 149.
45 Frédéric Vandenberghe, “Deleuzian Capitalism,” Philosophy & Social Criticism 34 (2008): 877.
46 Jordan Crandall, “Between Tracking and Formulating”, Vector 21 (2008), http://www.virose.pt/vector/b_21/crandall.html (accessed December 20, 2008).
47 Helen Nissenbaum, “Privacy as Contextual Integrity”, Washington Law Review 79 (2004): 119-57.
