A graduate seminar in cultural studies of science,
medicine and technology
Department of Communication,
The University of Memphis
Fall 2011
Dr. Marina Levina
Sorry all, having trouble posting with the other comments again.
Kate asks, “Do you agree with Terranova’s arguments of free labor regarding the internet?”
This is of current interest to me as I am writing a Marxist critique of "old media" this semester for one of my other classes. Because of that, I will try to draw out some key differences I noticed between how Terranova critiques free labor on the internet and how I critique a piece of "old media," the reality TV show Undercover Boss.
I think there are several key respects in which Terranova argues that "new media" warrants a different approach to labor than classic Marxist thought. First, human intelligence differs from more traditional types of labor and therefore cannot be "managed" the same way; second, from the chapters, I gather that Terranova discusses computers as enhancing human productivity, rather than the more traditional uses of machines (and of human bodies) on assembly lines and in factories; third, the internet is less transparent, with fewer boundaries and harder-to-pinpoint limits than television; fourth, not much emphasis is placed on morality in the digital economy; fifth, biological computing functions in a bottom-up system of power; and lastly, as we have been discussing in class this semester, from a Foucauldian perspective, control exists on the Internet, but at the level of self-regulation.
Terranova's observations differ significantly from the arguments I am making about a type of "old media." Reality TV, as Terranova discusses, is geared towards the audience. Reality TV participants are laborers who are usually part of the "cheap labor" market, often little more than experiments in a fixed, capitalist system. As Hasinoff (2008) found in her analysis of America's Next Top Model, and as I am arguing in my own essay, labor exists in a fixed capitalist system in which the tokened/picked winner(s) have the opportunity to rise out of a class division within capitalist structures. But they can do so only through individual success, not collectively, thus reiterating a "Horatio Alger, pull yourself up alone" narrative. This narrative of individualism is crucial to the living structures of capitalism in a classic Marxist perspective. It stabilizes the system of power because very few people will actually be able to individually achieve a higher class rank. I see this perspective as very different from Terranova's because, in her argument, "free labor" seems more collective: not necessarily people doing things collectively, but a collective, or at least a fluidity, of free-flowing information in space that makes it hard to see how labor is transferred or exists from a classic Marxist perspective.
Also, Terranova asserts that self-regulation occurs on the internet; I find this to differ greatly from the Marxist perspective in my analysis, along with countless other analyses of television. On reality TV, contestants usually do not regulate themselves; rather, at least on Undercover Boss and America's Next Top Model, there is already a fixed system in place that both workers and contestants know they must abide by. Yes, reality TV participants make changes to themselves, but the pressure comes from a top-down structure, either from the CEO or from Tyra Banks and her judges. Lastly, while Terranova observes that the digital economy does not take great interest in morality, I find the political economy to care greatly about morality and to base its capitalist structures around this notion. For example, returning to the capitalist value of individualism: it is ingrained in the idea of moral values and of what a citizen should do, morally, to stabilize these capitalist structures, the claim being that they are needed and serve a GOOD purpose for life. That is one narrative that comes from the top down, from the CEOs who talk to their employees on Undercover Boss.
So, to conclude, as Kate asks: Do I agree with Terranova's position on "free labor"? Yes and no. While I think a Marxist approach is the best approach to critiquing media in general, especially "old media," I see many relevant and possibly necessary points made by Terranova, especially the idea that the fluidity of information cannot be critiqued from a classic Marxist perspective. If only Marx were alive today to answer Terranova's analysis and critique "new media" himself!
Monday, December 5, 2011
Network Culture: Politics for the Information Age
Free Labour
Terranova defines free labour here as the "excessive activity that makes the Internet a thriving and hyperactive medium" (Terranova, 73). The social factory or society-factory describes how "work processes have shifted from the factory to society, thereby setting in motion a truly complex machine" (Terranova, 74). Today, the Internet acts as this multifarious organism producing a network milieu. "Simultaneously voluntarily given and unwaged, enjoyed and exploited, free labor on the Net includes the activity of building websites, modifying software packages, reading and participating in mailing lists and building virtual spaces" (Terranova, 74).
Much criticism of Marx's conception of labor persists in discourse today. Donna Haraway identifies the 'informatics of domination' as reflecting relationships between technology, labor and capital. Rejecting humanist positions, Gilroy explains, "If labor is the humanizing activity that makes [white] man, then surely, this 'humanising' labor does not really belong in the age of the networked, posthuman intelligence" (Terranova, 74). Despite such discussion, Terranova states that "the Internet does not automatically turn every user into an active producer and every worker into a creative subject" (Terranova, 75).
Terranova defines the digital economy as a “specific mechanism of internal ‘capture’ of larger pools of social and cultural knowledge…. [and] an important area for experimentation with value and free cultural/affective labour” (Terranova, 76). With the digital economy came the New Economy. The New Economy marks “a historical period marker [that] acknowledges its conventional association with Internet companies, and the digital economy—a less transient phenomenon based on key features of digitized information” (Terranova 76).
Richard Barbrook posits that the digital economy is a mixed economy including a "public element, a market-driven element and a gift economy" (Terranova, 76). Don Tapscott defines the digital economy as 'a new economy based on the networking of human intelligence.' He continues, "Human intelligence, however, also poses a problem: it cannot be managed in quite the same way as more traditional types of labour" (Terranova, 78).
Resistance persists due to the unquantifiable nature of 'knowledge.' Terranova refers to the Internet as a "consensus-creating machine, which socializes mass of proliferated knowledge workers into the economy of continuous innovation" (Terranova, 81). The knowledge worker remains confined to class formations and is not necessarily given elite status despite his or her contribution to capital. However, it remains unclear why some individuals qualify as knowledge workers while others do not.
The Italian autonomist Maurizio Lazzarato describes immaterial labour as referencing two distinct realms of labor. First, immaterial labour refers to the informational content of the commodity; in this sense, labour is seen through direct action, where skills typically involve cybernetics and computer control (Terranova, 82). Second, Lazzarato identifies immaterial labour as that which produces cultural, rather than informational, content. This type of labour is not work in its typical sense but may represent a range of activities that aid in fixing popular cultural preferences (Terranova, 82). According to Lazzarato, all citizens have a chance to contribute immaterial labour, as it is not fixed by class. "This means that immaterial labour is a virtuality (an undetermined capacity) which belongs to the postindustrial productive subjectivity as a whole" (Terranova, 83). Therefore, postmodern governments encourage the potentialities of work among the unemployed.
Terranova notes that the unemployed, “must undergo continuous training in order to be both monitored and kept alive as some kind of postindustrial reserve force” (Terranova, 83). The postmodern agenda, however, did not happen overnight. Lazzarato states, “The virtuality of this capacity is neither empty nor ahistoric; it is rather an opening and a potentiality, that have as their historical origins and antecedents the ‘struggle against work’…and in more recent times, the process of socialization, educational formation and cultural self-valorization” (Terranova, 83). This agenda represents capitalist motivations of optimizing its citizens within the labor force.
Levy asserts that networks "enable the emergence of a collective intelligence" (Terranova, 85). We no longer think according to the Cartesian model based on singularity (I think), but according to a collectivity of thought (we think). Levy defines collective intelligence as "a form of universally distributed intelligence, constantly enhanced, coordinated in real time, and resulting in the effective mobilization of skills" (Terranova, 85). This proposes a flexible and constantly changing epistemology. Levy continues by explaining the means and ends of collective intelligence: "The basis and goal of collective intelligence is the mutual recognition and enrichment of individuals rather than the cult of fetishized or hypostatized communities" (Terranova, 85). Computers particularly accentuate the inherent value of human intelligence, as man's productivity reveals hidden creative potentials.
In Karl Marx's 'Fragment on Machines,' knowledge becomes 'incarnate in the automatic system of machines,' where labor is only a link within a mechanical organism. However, Terranova points out that the Italian autonomists 'eschew the modernist imagery of the general intellect as a hellish machine' (Terranova, 87). Instead they find the general intellect to be a principal productive force manifesting before them: no longer a hellish machine, but rather an ensemble of knowledge…which constitutes the epicenter of social production.
Humanists believe that the Italian autonomists neglected the idea of the mass intellectuality of living labor as it articulates the general intellect. Precisely, mass intellectuality "as an ensemble, as a social body—is the repository of the indivisible knowledges of living subjects and of their linguistic cooperation…an important part of knowledge cannot be deposited in machines, but…it must come into being as the direct interaction of the labor force" (Terranova, 87-88).
Neither knowledge labor nor unemployment exists outside collective knowledge: knowledge labor is inherently collective, always the result of a collective and social production of knowledge. Capital's problem is how to extract as much value as possible from it. Terranova notes that both a continuity and a break exist between older media and new media as they relate to cultural and affective labor.
Continuity assumes a "common reliance on their public/users as productive subjects" (Terranova, 88). However, a split exists in the mode of production and in how power/knowledge operates in the two forms. The internet is highly decentralized and dispersed compared to television. Although old media also tapped into free labour, television and print media did so in a more structured way than is seen in new media. Commercialization of the Internet is identified as a key pressure on Barbrook's gift economy: increasing privatization and e-commerce create an economy of exchange.
The capitalistic logic of production is accelerated by 'immaterial' products. Humanistic concerns remain, as the real seemed to disappear as the Internet grew; specifically, hyperreality represents the humanist nightmare. Hyperreality is "a society without humanity, the culmination of a progressive taking over of the realm of representation" (Terranova, 90). While the commodity seems to disappear, it does so not in the material sense; rather, the quality of the labor put into the commodity becomes subsidiary. Commodities become ephemeral works in progress: no finished product likely exists, only products indefinitely in the process of becoming. Therefore, the quality of the commodity depends on the quality of the labor.
Sustainability of the Internet is intrinsically dependent on massive amounts of labor. The Internet, then, is sustainable only insofar as it is ephemeral, updatable and sustained by mass collective labor. "The notion of users' labour maintains an ideological and material centrality which runs consistently throughout the turbulent succession of internet fads" (Terranova, 91).
The open-source movement demonstrates the overreliance of the digital economy as such on free labour, both in the sense of 'not financially rewarded' and of 'willingly given' (Terranova, 93). Terranova asserts that digital works are "not created outside capital then reappropriated by capital, but are the results of a complex history where the relation between labour and capital is mutually constitutive, entangled and crucially forged during the crisis of Fordism" (Terranova, 94). "Free labour is a desire of labour immanent to late capitalism and late capitalism is the field which both sustains free labour and exhausts it" (Terranova, 94). Therefore, the Internet acts as both a gift economy and an advanced capitalist economy.
Here, Terranova explores what is known as the ‘Old web v. New web’ debate. Television shows, which are increasingly becoming ‘people shows’ or ‘reality television’ rely primarily on the audience just as the Internet relies on user activity. Terranova notes that these programs, “manage the impossible, create monetary value out of the most reluctant members of the postmodern cultural economy: those who do not produce marketable style, who are not qualified enough to enter the fast world of the knowledge economy, are converted into monetary value through their capacity to affectively perform their misery” (Terranova, 95).
"The digital economy cares only tangentially about morality. What it really cares about is an abundance of production, an immediate interface with cultural and technical labour whose result is a diffuse, non-dialectical antagonism and a crisis in the capitalist modes of valorization as such" (Terranova, 96).
The Internet channels and adjudicates responsibilities, duties and rights. Open and distributed modes of production represent "the field of experimentation of new strategies of organization that starts from the open potentiality of the many in order to develop new sets of constraints able to modulate appropriately the relation between value and surplus value" (Terranova, 96). Therefore, productivity is critical, as it creates value for capitalism. Terranova refers to this relation of value and surplus value as the 'entanglement of emergence and control' (Terranova, 97).
Soft Control
Terranova defines biological computing as "a cluster of subdisciplines within computer science—such as artificial life, mobotics and neural networks" (Terranova, 99). Biological computing is essentially a bottom-up organization achieved through the simulation of "the conditions of their emergence in an artificial medium—the digital computer" (Terranova, 99).
Lewis Mumford, writing in 1934, argued against the prevailing industrial technological ontology, hoping it would be replaced by a new technological age. Mumford viewed this transition as a return to the organic. "Human technicity does not so much construct increasingly elaborate extensions of man but rather intensifies at specific points its engagement with different levels of the organization of nature" (Terranova, 98). However, the artificial played a key role in Mumford's predictive analysis. While nature materializes out of these interactions, the relationship is also artificial; that is, it is both inventive and productive. The network becomes a topological production machine if understood as a 'spatial diagram' for the age of computing.
Once biological computing can outperform the programmer and his instructions, emergent phenomena arise. Because biological computing possesses no material center, it consists of leaderless numbers of elements bound only by their own protocols, making the Internet an explicit instance and product of this logic. The self-organizing nature of the network represents a mode of production marked by an excess of value. Abstract machines of soft control surface, acting as a "diagram of power that takes as its operational field the productive capacities of the hyperconnected many" (Terranova, 99). A reconceptualization of life occurred as the biological turn in computing arose, focusing on natural and artificial bottom-up organization.
The artificial life theorists Charles Taylor and David Jefferson support this form of organization. "The living organism is no longer mainly one single and complicated biochemical machine, but is now essentially the aggregate result of the interaction of a large population of relatively simple machines. These populations of interacting simple machines are working at all levels of the biophysical organization of matter" (Terranova, 101). Terranova summarizes the consequences of this aggregation of the simple as it organizes matter: "As a consequence, 'to animate machines….is not to 'bring' life to a machine; rather it is to organize a population of machines in such a way that their interactive dynamics is 'alive'" (Terranova, 101).
Artificial intelligence now seeks to study the activity of neural cells in the central nervous system (CNS). Artificial life theorists seek to reproduce the mind's complex features, such as the ability to hold an indefinite memory. Gregory Bateson explains the current scientific conceptualization of the brain (Terranova, 102):
"We may say that the mind is immanent in those circuits of the brain that are complete within the brain. Or that mind is immanent in circuits which are complete within the system brain plus body. Or finally, that mind is immanent in the larger system—man plus environment" (Terranova, 102).
Biological computation must concern itself with the power of the minute; that is, theorists measure biological computation as exterior and relational in nature. No finite determination or central control is possible due to the multitude of variables; therefore, systems are eternally dynamic. 'Open systems' do not die or reproduce in a self-creational sense but "are always becoming something else" (Terranova, 102). These systems are therefore highly unpredictable and thereby difficult to control. The lack of a center in an open system also disallows dissecting the creature because, "once the connection and mutual affection with other elements is removed, the individual element becomes passive and inert" (Terranova, 104). Therefore, despite the mass collection of data regarding individuals, the true dynamic of the network proves unattainable. Still, despite this uncertainty regarding control, open systems offer the potential of enormous productivity generated by their collective nature within the network.
“More is different”
Fluidity is a critical concept in New Economy capitalism, as fluidity relates bottom-up organization and speed. Moreness is "explicitly linked to the need for a different immanent logic of organization that demands new strategies of control to take advantage of its potentially infinite productivity while controlling its catastrophic potential" (Terranova, 106). Maintaining such dynamic, fluid environments requires the identification of a certain 'phase space,' recognizable at a certain level of speed. John von Neumann linked evolutionary biology and computation when he devised a computational experiment called cellular automata (CAs). CAs represented a "relatively new field that appeared in the midst of the intellectual dust that accompanied the development of the first digital computers" (Terranova, 109).
"CAs form dynamic milieus, space-time blocks, that have no real territorial qualities but do have rich topographies and challenging dynamics" (Terranova, 112). This occurs not because any central control exists, but because the cells are capable of spontaneous self-regulation.
The researcher Stephen Wolfram empirically classified CA dynamics using one hundred runs of 'the game of life,' and subsequently catalogued four classes into which CAs fit. Class I CAs are programs that reach a computational limit or end point. Class II CAs represent 'limit cycles,' self-replicating structures that glide across the 'computational space.' Class III CAs produce fractured structures capable of self-replication like Class II CAs, but they also possess the capability to scale and therefore progressively structure the CA. Finally, Class IV systems are highly chaotic, unstable and random; the random nature of this class results in no predictable time limits, which makes Class IV CAs highly unpredictable. Chris Langton subsequently reproduced Wolfram's study but ran repeated runs of CA systems thousands of times. As a result, Langton produced a new classification order and also a critical metric of measurement, the lambda. Lambda measures "the fluctuation of different CA systems with their relation to their computational abilities" (Terranova, 114). The lambda ranges from zero to one, with one extreme representing the most random systems, incapable of computation, and the other the most highly structured CA systems, also incapable of computation due to their inflexibility. Langton found that "the key area of computation is identified with a border zone fluctuating between highly ordered and highly random CAs" (Terranova, 114).
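As an illustration of the kind of system Wolfram and Langton were classifying (my own sketch, not anything from Terranova's text), a one-dimensional "elementary" cellular automaton can be written in a few lines, together with a Langton-style lambda that simply counts the fraction of rule outcomes mapping to the active state:

```python
# Illustrative sketch only: a one-dimensional elementary cellular
# automaton. Each cell looks at itself and its two neighbours and
# applies a fixed rule table; complex global behaviour emerges from
# these purely local, bottom-up interactions, with no central control.

def step(cells, rule):
    """Apply an elementary CA rule (an integer 0-255) once, with wraparound edges."""
    n = len(cells)
    out = []
    for i in range(n):
        left, centre, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)              # look up the rule table bit
    return out

def langton_lambda(rule):
    """Langton-style lambda: the fraction of neighbourhoods mapped to the
    'active' (non-quiescent) state. 0 = frozen rule, 1 = maximally active."""
    return sum((rule >> k) & 1 for k in range(8)) / 8

# Rule 110 sits in Wolfram's 'complex' border zone; rule 0 is trivially ordered.
row = [0] * 31
row[15] = 1  # a single live cell in the middle
for _ in range(15):
    row = step(row, 110)

print(langton_lambda(0))    # 0.0: totally ordered, no computation possible
print(langton_lambda(110))  # 0.625: in the intermediate range
```

The point of the sketch is the one Terranova draws from Langton: the interesting behaviour lives neither at the frozen nor at the fully random end of the rule space, but in between.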
Terranova first notes that while the CA run is 'out of control,' this does not make it 'beyond control.' "The fluidity of populations, their susceptibility to epidemics and contagion, is considered an asset: at a certain value or informational speed, the movement of cells turns liquid and it is this state that is identified as the most productive, engendering vertical structures that are both stable and propagating" (Terranova, 114). CAs depend on algorithms to survive. Genetic algorithms illustrate both "a mode of control and its limits" (Terranova, 115).
“Biological computation expresses a socio-technical diagram of control that is concerned with producing effects of emergence by a manipulation of the rules and configurations within a given milieu” (Terranova, 116).
Critics of the biological turn posit that the Internet is too life-like. "To say that the Internet might be lifelike was the equivalent of sanctioning the ravages brought by rampant free-market capitalism on the 'excluded masses'" (Terranova, 121). Terranova furthers, "These systems are not unstructured or formless, but they are minimally structured or semi-ordered" (Terranova, 121). Terranova defines this new biopolitical plane as that which "can be organized through the deployment of an immanent control, which operates directly within the productive power of the multitude and the clinamen" (Terranova, 122).
Critics argue that Richard Dawkins's use of 'selfish' acts as an "apparatus of subjectification" (Terranova, 126). Franco Berardi introduces what he deems the "unhappiness factory" that results from the unhappy gene. The CBS television program Big Brother is an example of Berardi's 'unhappiness factory,' as unlikely 'contestants' are secluded and forced to compete and relate.
Hardt and Negri define the multitude as a political mode of engagement "located outside the majoritarian and representative model of modern democracies in their relation with the recomposition of class experience" (Terranova, 129). Franco Berardi, by contrast, emphasizes the multitude's "tendency to dissolution, the entropy that is diffused in every social system and which renders impossible the labour of power but also the labour of political organization" (Terranova, 130). Therefore it is critical to reconsider the exploit; that is, "hacking the multitude is still an open game."
Discussion Questions:
In light of our ongoing discussion of control and power, how does 'free labour' work in capitalist societies? Terranova notes that free labor is not necessarily exploited labor. Do you agree with her position? How do you view free labor regarding the Internet?
Programs such as Big Brother offer an example of the theoretical 'unhappiness factory.' How does the video clip from Big Brother prove or disprove this theory?
In Network Culture, Terranova begins by discussing the "heterogeneous assemblage" of network culture. Terranova argues for the need to specifically and individually reflect on network cultures because "they appear to us as a meshwork of overlapping cultural formations, of hybrid reinventions, cross-pollinations and singular variations" (p. 1-2). In particular, the interconnectedness of communication systems is not necessarily technological; rather, "it is a tendency of informational flows to spill over from whatever network they are circulating in and hence to escape the narrowness of the channel and to open up to a larger milieu" (p. 2). Terranova notes the change in observing what used to be called "media messages": "the flow from a sender to a receiver" is now countered by messages that "spread and interact, mix and mutate within a singular (and yet differentiated) informational plane" (p. 2). As information flows through and from channels and mediums, and as it is decoded and recoded by local dynamics, changes occur in its form: "it disappears or it propagates; it amplifies or inhibits the emergence of communalities and antagonisms" (p. 2). To reiterate, Terranova elaborates that the cultural production of meaning is mainly unattached from the larger informational processes that establish the dispersement of images and words, noises and affects across a hyperconnected world.
Terranova posits: are we then victims of an "informational explosion" that is destroying humanity? Terranova will argue that informational processes do not exhibit a power of the 'immaterial' over the material; rather, because of the acceleration of history and the annihilation of distances within an informational environment, this milieu is a "creative destruction," "composed of dynamic and shifting relations between such 'massless flows,'" that serves as a productive movement "that releases (rather than simply inhibits) potentials for transformation" (p. 2-3, p. 8). Further, Terranova asserts that "a network culture is inseparable both from a kind of network physics (that is physical processes of differentiation and convergence, emergence and capture, openness and closure, and coding and overcoding) and a network politics (implying the existence of an active engagement with the dynamics of information flows)" (p. 3).
In Chapter 1, Terranova begins to center her argument by reworking the concept of information away, first, from the idea that information "is the content of a communication," and second, from the "notion that information is immaterial" (p. 3). Here, Terranova discusses three hypotheses from Claude E. Shannon's (1948) essay in which he formed his mathematical definition of information: "information is defined by the relation of signal to noise; information is a statistical measure of the uncertainty or entropy of a system; information implies a nonlinear and nondeterministic relationship between the microscopic and the macroscopic levels of a physical system" (p. 9). Terranova stresses that these hypotheses also offer some other interesting corollaries, considerations, on informational cultures.
Proposition I: Information is what stands out from noise.
Corollary Ia: Within informational cultures, the struggle over meanings is subordinated to that over ‘media effects.’
Corollary Ib: The cultural politics of information involves a return to the minimum conditions of communication (the relation of signal to noise and the problem of making contact)
And secondly,
Proposition II: The transmission of information implies the communication and exclusion of probable alternatives.
Corollary II: Informational cultures challenge the coincidence of the real with the possible.
Here, Terranova argues:
“The communication of information thus implies the reduction of material processes to a closed system defined by the relation between the actual selection (the real) and the field of probabilities that it defines (the statistically probable). The relation between the real and the probable, however, also evokes the spectre of the improbable, the fluctuation and hence the virtual. As such, a cultural politics of information somehow resists the confinement of social change to a closed set mutually excluding and predetermined alternatives; and deploys an active engagement with the transformative potential of the virtual (that which is beyond measure)” (p.20).
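Shannon's statistical measure, which underlies the propositions above, can be made concrete with a short sketch (my own illustration, not something from Terranova's text): the entropy H = -Σ p·log2(p) quantifies the uncertainty of a source, so a certain outcome carries no information, while equally probable alternatives carry the most.

```python
# A minimal sketch of Shannon's measure: information/entropy as a
# statistical measure of the uncertainty of a system. The more evenly
# spread the probabilities, the more uncertain the source and the more
# information each selection carries; a certain outcome excludes no
# alternatives and so carries none.

import math

def shannon_entropy(probabilities):
    """H = -sum(p * log2(p)), in bits; zero-probability terms contribute nothing."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

print(shannon_entropy([1.0]))        # 0.0 bits: a certain outcome
print(shannon_entropy([0.5, 0.5]))   # 1.0 bit: a fair coin
print(shannon_entropy([0.25] * 4))   # 2.0 bits: four equally probable alternatives
```

In Terranova's terms, each actual selection is informative only against the field of probable alternatives it excludes; the entropy is a measure of the size of that field.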
The Internet in Network Time
In Chapter 2, Terranova uses the example of the Internet to argue that it encompasses an active design technique "able to deal with the openness of systems" in a neo-imperial electronic age, demonstrated "in phenomena such as blogging, mailing lists, and web rings" (p. 4). Here, Terranova stresses that communication technologies function beyond just linking different localities; moreover, as we briefly discussed when reading The Exploit, technologies "actively mould what they connect by creating new topological configurations and thus effectively contributing to the constitution of geopolitical entities such as cities and regions, or nations and empires" (p. 40). Because of the complex, interwoven features of the communication topology of Empire (aeroplanes, freight ships, television, cinema, computers and telephony), all these different systems correlate by converging in a hypernetwork, "a meshwork potentially connecting every point to every other point" (p. 41). Hence, the network is becoming less a description of a specific system and more a phrase to define "the formation of a single and yet multidimensional information milieu – linked by the dynamics of information propagation and segmented by diverse modes and channels of circulation" (p. 41).
More specifically regarding the Internet, Terranova asserts that if the Internet does appear as a key global communication technology, it is because, unlike other global communication technologies such as television, the Internet "has been conceived and evolved as a network of networks, or an internetwork, a topological formation that presents some challenging insights into the dynamics underlying the formation of a global network culture" (p. 41). Looking at the architecture of the Internet as a turning point within the history of communication, and drawing on previous theories by Castells and Virilio, Terranova argues that space on the Internet stands in direct relation to its information architecture. Through the use of addresses and URLs placed in a common address space, Terranova argues that "we are to all effects referring to a specific address in this global, electronic map…which confirms the image of a distance between a world of information and a world of embodied and bounded locality" (p. 44). Thus, as mentioned in Terranova's introduction, the Internet is highly homogeneous because "it can be entered at any point and each movement is in principle as likely as the next" (p. 44).
Furthermore, Terranova asks, "How can we reconcile the grid-like structure of electronic space with the dynamic features of the Internet, with the movements of information?...How do we explain chain mails and list serves, web logs and web rings, peer-to-peer networks and denial-of-service attacks?" (p. 49). Terranova argues the possibility that, by contemplating the Internet through the concept of the grid, people might have "fallen into a classic metaphysical trap: that of reducing duration to movement, that is, of confusing time with space" (p. 50). As for the movements of information, Terranova observes that a slice of information spreading throughout the open space of the network "is not only a vector in search of a target, it is also a potential transformation of the space crossed that always leaves something behind – a new idea, a new affect (even an annoyance), a modification of the overall topology. Information is not simply transmitted from point A to point B: it propagates and by propagation it affects and modifies its milieu" (p. 51). Lastly, drawing from Hardt and Negri's description of network power when discussing the Internet, Terranova agrees that its imperial sovereignty lies in the fact that "its space is always open…an active openness of network spatiality" (p. 62). In such space, all objects and devices can "be networked to the network of networks in a kind of ubiquitous computational landscape" (p. 63).
Questions for Contemplation: As stated above, Terranova observes that a piece of information spreading throughout the open space of the network "is not only a vector in search of a target, it is also a potential transformation of the space crossed that always leaves something behind – a new idea, a new affect (even an annoyance), a modification of the overall topology. Information is not simply transmitted from point A to point B: it propagates and by propagation it affects and modifies its milieu" (p. 51).
We have discussed in class how information is transferred and exists in the network through websites such as 23andMe and through banking systems. How else do you observe information being situated, managed, and transferred throughout the network? And secondly, how does Terranova's argument concerning information theory, from Chapters 1 and 2, influence how you might theorize the spatial activity of the network?
The section from Galloway and Thacker ("Disappearance; or, I've Seen It All Before") about technological speed gave me the mental picture of quarterbacks throwing to a spot on the field before the receiver actually gets there. The quarterback is not actually throwing the ball to the receiver but to the point where he is expected to be in the future. With this image in mind while watching the Chiefs stink up the field on Sunday, this AT&T commercial was aired numerous times. I now have the line about putting videos on Facebook stuck in my head, but I think the commercial is a good example of technology's speed affecting our realities.
The authors discuss similarities between computer viruses and biological epidemics within the context of how they are displayed. Forced to "care for the most misanthropic agents of infection and disease, one must curate that which eludes the cure" (p. 106), for if the disease were cured, there would be nothing left to display. Therefore, the best curator would necessarily be the most careless, the one incapable of exacerbating the virus. This is caring for the virus versus caring for those infected. The political
economy in the 1700s (see Ricardo, Smith, and Malthus) was based on a
correlation between the health of the population and the wealth of the country.
Today’s public health has moved toward the idea of caring for health information
and ensuring that “the biological bodies of the population correlate to the informatics
patterns on the screen” (p. 107). Statistics, rather than bodies, are central.
The Datum of Cura II
Foucault notes that curating entails a form of governance: caring for oneself would also benefit others through self-transformation. Becoming the best individual you can be will undoubtedly benefit society (so long as the best you can be is good; what about people whose best still sucks?). Yet self-destruction is inherent in self-transformation (http://www.guardian.co.uk/artanddesign/2009/sep/28/gustav-metzger-auto-destructive). Liu calls this "viral aesthetics," where the distinction between production and destruction is blurred. In the linked article, an exhibit of Metzger's work involves the demolition of an automobile. Its destruction becomes the art. Curare (Latin for "to care") is the point where control and transformation intersect. "Is there a certain 'carelessness' to curare?" (p. 109)
Sovereignty and Biology I
In politics, the body is often used as a metaphor for political organization. For Plato, the primary threat to the body politic was the move from concerns about justice to those "of wealth (oligarchy) and concerns of appetites (democracy)" (p. 109). The excess of these concerns results in disease of the political body, similar to excess bile and phlegm in the physical body. Lawmakers must work to avoid these symptoms altogether, but in the event they appear, should eradicate them "cells and all." Our knowledge of the physical body has changed since the time of Plato, so does this mean that our understanding of the political body has also changed? Should it change?
Sovereignty and Biology II
The model of sovereignty proposed by Hobbes in Leviathan, where citizens constitute the body and the sovereign the head, raises a fundamental question of political thought: can a political collectivity exist without transferring its rights to a supreme ruler? One of the ways that sovereignty maintains political power is through the continual identification of biological threats. In doing so, the sovereign can justify enacting stronger controls over citizens by protecting them from threats to the health of the population. The medicalization of politics, which oversees the behaviors, conduct, discourse, and desires of biological life, occurs where discipline and sovereignty meet, a place that challenges the distinction between the good of the individual and the good of the collective.
Abandoning the Body Politic
The body politic has two states: (1) constitutive, where the body politic is assembled through the "social contract" based on securing life; and (2) dissolution: chaos, a return to the "state of nature," sovereignty of the people, the dark side of the constitutive body. These two states feed into each other through war. "Peace is waging a secret war." Abandoning the
body politic means deserting the military foundations of politics and also opening
the body to its own abandon. “What is left is an irremediable scattering, a
dissemination of ontological specks”
(p.111).
The Ghost in the Network
Heterogeneous network phenomena can be understood through the identification of commonalities in shared patterns: "a set of relations between dots (nodes) and lines (edges)" (p. 112). Thus, organization gives shape to matter and serves as a means to inform (in-form). The living network can also be viewed in political terms. There is no central node that sits in the middle and monitors/controls every link and node. A single node cannot break the web. A scale-free network is a web without a spider. For politics to be viewed in a natural sense, networks need to be seen as an unavoidable consequence of their evolution.
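The nodes-and-edges formalism the authors borrow from graph theory can be made concrete in a few lines. This is my own illustrative sketch, not anything from the book: it stores a small undirected graph as an adjacency mapping and checks that removing any single node leaves the rest of a well-connected web reachable, i.e., there is no "spider" whose removal collapses it.

```python
# Illustrative sketch (not from the text): a network as "dots (nodes) and
# lines (edges)", stored as an adjacency mapping.
from collections import defaultdict

def build_graph(edges):
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def reachable(graph, start, removed=frozenset()):
    """Return the set of nodes reachable from start, ignoring 'removed' nodes."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node in seen or node in removed:
            continue
        seen.add(node)
        stack.extend(graph[node])
    return seen

# A small web with redundant links: no central node holds it together.
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
g = build_graph(edges)

# Removing any single node still leaves the remaining nodes connected.
for node in list(g):
    others = set(g) - {node}
    survivor = next(iter(others))
    assert reachable(g, survivor, removed={node}) == others
```

The redundancy of edges, rather than any privileged node, is what makes the web robust, which is the sense in which "a single node cannot break the web."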
Birth of the Algorithm
An algorithm
is a type of visible articulation of any given processor’s machinic grammar.
Political Animals
Biology is a
prerequisite for politics (Aristotle). If the human being is a political
animal, are there also animal politics? Vocabularies of biology retain the
remnants of sovereignty: the queen bee, the drone. But, what about swarm
intelligence where there is no centralized power, but only an instance of
self-organization?
Sovereignty and the State of Emergency
“Modern
sovereignty is based not on the right to impose laws but on the ability to
suspend the law, to claim a state of emergency” (p.115). Both sides in a state
of emergency rely on network management- either in destabilizing key nodes or
fortifying them.
Epidemic and Endemic
The distinction between emerging infectious disease and bioterrorism, based on cause (one naturally occurring, the other the result of direct human intervention), has been muddled if not fully abandoned. In its place, the U.S. government has developed an inclusive approach to biopolitics. Regardless of the context, the role of the government is to alert and respond to biological threats. What matters most is what is at stake, which is always life itself. To achieve this aim, "medical security" seeks "to protect the population, defined as a biological and genetic entity, from any possible biological threat, be it conventional war or death itself" (p. 116). The biological threat is always present.
Network Being
Information networks are often described as a "global village" or a "collective consciousness." These references to networks somehow being alive bring up questions of what being actually means. The authors posit two questions: At what point does the difference between "being" and "life" implode? What would be the conditions for the nondistinction between "being" and "life"? They recognize problems with the distinction: the life sciences are faced with anomalies in which living organisms cross species barriers, raising questions of what it means to be alive, as in the case of a virus. For Heidegger, "life" and "being" are separate from but dependent upon each other. In network science, however, the concept of "being" is arrived at from a privative definition of "life." In this sense, all networks form the same animal: a graph or a network. In turn, network science can study all networks as the same type of being. The impact of this view of network being is ambiguous. Does the experience of being in a network constitute a network phenomenology (where occurrences are shared)? Does it mean the existence of properties that differ across networks? The only "life" that is specific to networks is their "being" a network.
Good Viruses (SimSARS I)
Network-based strategies are being developed on all levels. Computer security uses network solutions to address network threats in the world of online viruses. This behind-the-scenes war is invisible to most users but is in constant motion. Similarly, epidemiologists understand how infectious diseases spread effectively through a variety of networks. Public health agencies, in turn, use these same networks as instruments of awareness and prevention. Caution must be exercised in utilizing these networks to avoid widespread panic or political hype. In the post-9/11 United States, infectious disease and bioterrorism are inevitably linked. In doing so, questions of a new biopolitical war or a new medical terror are raised. "Good" viruses introduced to combat emerging infectious diseases are administered through the same networks as the disease, but will only succeed if their rate of infection is greater than that of the bad virus.
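The claim that a "good" virus must out-propagate the "bad" one can be illustrated with a toy simulation. This is my own sketch, not anything from the book: two labels race through the same randomly generated contact network, each spreading per round with its own infection rate, and whichever reaches a node first claims it.

```python
# Toy illustration (not from the book): two "viruses" racing through the
# same contact network; whichever reaches an uninfected node first claims it.
import random

def race(n_nodes, contacts_per_node, rate_bad, rate_good, seed=0):
    rng = random.Random(seed)
    # Each node gets a fixed list of contacts it can pass infection to.
    network = {i: rng.sample(range(n_nodes), contacts_per_node)
               for i in range(n_nodes)}
    state = {i: None for i in range(n_nodes)}  # None, "bad", or "good"
    state[0], state[1] = "bad", "good"         # one seed node for each virus
    rates = {"bad": rate_bad, "good": rate_good}
    for _ in range(50):  # simulation rounds
        for node, label in list(state.items()):  # snapshot of this round
            if label is None:
                continue
            for neighbor in network[node]:
                # An infected node passes its label on with probability = rate.
                if state[neighbor] is None and rng.random() < rates[label]:
                    state[neighbor] = label
    counts = {"bad": 0, "good": 0, None: 0}
    for label in state.values():
        counts[label] += 1
    return counts

# With a much higher infection rate, the "good" virus claims most nodes.
print(race(200, 4, rate_bad=0.05, rate_good=0.4))
```

The parameters (200 nodes, 4 contacts each, the two rates) are arbitrary choices for the sketch; the point is only that both agents use the identical network, so relative propagation rate decides the outcome.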
Medical Surveillance (SimSARS II)
Developments in medical surveillance are representative of the intensive nature of networks. The Centers for Disease Control and Prevention (CDC) is working to develop "syndromic surveillance," where the goal is to implement a real-time, nationwide system for detecting anomalies in public health data which could signal a possible outbreak or bioterrorist attack. In this case, "an information network is used to combat a biological network" (p. 121). Similarly, the World Health Organization's Global Outbreak Alert and Response Network works to ensure that potential threats (either naturally occurring or intentionally caused) are quickly verified and information shared through the Network. Medical surveillance in itself is not problematic, but could become controversial when what constitutes "health data" is disputed. "In the informatic mode, disease is always virtual" (p. 122), which creates a permanent state of emergency. Disease is always looming but kept just out of reach.
Feedback versus Interaction I
Two models
typify the evolution of two-way communication in mass media. In the first model
(feedback), information flows in one direction- from the public to the institution.
Two-way communication (interaction) is seen in the second model. Here
communication occurs within a system of communicative peers where each peer can
physically affect another.
Feedback versus Interaction II
Feedback and
interaction also correspond to two different models of control. Feedback
corresponds to the cybernetic model of control where one party is always the
controlling party and the other is the controlled party (television, radio). Interaction
corresponds to a networked model of control where decision making occurs
multilaterally and simultaneously. The authors argue that “double the communication leads to double the control” (p. 124)
through surveillance, monitoring, biometrics, and gene therapy.
Rhetorics of Freedom
Technological systems can be either closed or open. Closed systems are generally created by either commercial or state interests (profit through control and scarcity). Open systems are generally associated with the public and political transparency (innovative standards in the public domain). Rather than focus on the opposition between open and closed, the authors examine alternative logics of control. Open control logics use an informatic (material) mode of control, while closed logics use a social model of control. From this perspective, informatic control is as powerful as, if not more powerful than, social control.
A Google Search for My Body
One is either online and accounted for or offline and still accounted for. The body becomes a medium of constant locatability surrounded by personal network devices. For example, think of how your Facebook page tracks your internet habits. How many people have disconnected the chat function on Facebook or other instant messengers because they do not want others to know they are online?
Divine Metabolism
Life-forms are not merely biological but include social, cultural, and political forms as well, though not all of these have an equal claim on life. Networks are the site where control works through the continual relation to life-forms.
The Paranormal and the Pathological I
Conceptions of health and illness have changed from a quantitative approach (illness as a deviation from the norm), which requires a return to balance, to a qualitative one (disease as a state different from health), where medicine's role is to treat the symptoms of the disease. However, the third conception, "disease as error," is the most telling. In this conception, disease is viewed as an error in the organism. It does not manifest itself in testimony from the patient or in signs expressed on the body. Rather, the disease exists only in itself. It is everywhere and nowhere. Disease is an informatic expression that must be mapped and decoded.
The Paranormal and the Pathological II
Because disease
can occur as a mass phenomenon, includes modes of transmission and contagion,
and exists between bodies, it is necessary to evaluate diseases as networks. If
the processes that lead to an outbreak have no center and are multicausal, how
can they be prevented? Epidemics are both medical situations and political
ones. If epidemics are networks, the problem of multiplicities in networks is
the tension between sovereignty and control. Compounding this problem is sovereignty
found in the supernatural.
Universals of Identification
Once universal
standards of identification are agreed upon, real-time tracking technologies
will increase. Space will become rewindable and archivable; the world will exist
as a giant convenience store. See these videos about gunshot tracking technology to get an idea of what that world will look like or how it is made possible.
Molecular
biology laboratories employ (at least) two networks to encode, recode, and
decode biological information: the informatics network of the Internet and the
biological network. The Internet allows for uploading and downloading of
biological information and brings together databases, search engines, and
specialized hardware. The biological network occurs in-between DNA and an array
of other proteins.
Unknown Unknowns
From Donald Rumsfeld:
“Because as we know, there are known knowns; there are things we know we know.
We also know there are known unknowns; that is to say we know there are some
things we do not know. But there are also unknown unknowns—the ones we don’t
know we don’t know.” The unknown unknowns bring forth visions of death, fear,
and terror, the end of humanity.
Codification, Not Reification
The new concern for political problems is the extraction of abstract code from objects. The process of bioprospecting, whereby unique genes are harvested for their informational value, has reduced individuals to digital forms, ignoring their lived reality. Today's impoverished populations are expected to give up their labor power as well as their bodily information. "The biomass, not social relations, is today's site of exploitation" (p. 135).
Tactics of Nonexistence
“The question of
nonexistence is this: how does one develop techniques and technologies to make
oneself unaccounted for?” (p.135). Nonexistence is tactical for anything that
wishes to avoid control.
Disappearance; or, I've Seen It All Before
Disappearance is the by-product of speed. As technology gets faster, one's physical and biological self disappears amid a plethora of files, photos, videos, and a variety of Net tracking data. Bey proposes that nomadism is the response to speed. A "temporary autonomous zone" (TAZ) is a temporary space set up to avoid formal structural control. Move before the cultural and political mainstream knows what happened.
Stop Motion
"The question of animation and the question of 'life' are often the same question" (p. 138). Graph theory begins from the classical division between node and edge and in doing so privileges space over time, site over duration (p. 139). Networks can only be thought of in a different way if animation (movement) becomes ontological or a universal right.
Pure Metal
Cultural constructionism focuses more on the way that gender/sexuality, race/ethnicity, and class/status form, shape, and construct a self than on the innate self or identity. The human subject is decentralized in the process. What about the nonhuman? Regardless of views on the nonhuman (incorporating or discorporating), it continues to be negatively defined. The human is always the starting point for comparison. How then are nonhuman elements that run through or are found within humans classified?
The Hypertrophy of Matter (Four Definitions and One Axiom)
Definition 1. Immanence: the process of exorbitance, of desertion, of spreading out
Definition 2. Emptiness: the space between things; an edge
Definition 3. Substance: the continual by-product of immanence and emptiness; a node
Definition 4. Indistinction: the quality of relations in a network
Axiom 1. "Networks have as their central problematic the fact that they prioritize nodes at the same time as they exist through the precession of edges" (p. 143)
The User and the Programmer
"User" is a synonym for "consumer." "Programmer" is a synonym for "producer." Most legal prohibitions are migrating away from the user model (being) toward the programmer model (doing). Anyone can be a programmer if he or she chooses. More and more threats to programming are seen in everyday life. Future politics will focus on use over expression.
Interface
The authors
define interface as “an artificial
structure of differentiation between two media” (p.144). Differentiation occurs
whenever a structure is added to raw data. Data does not appear fully formed
and whole but instead gets its shape from social and technical processes.
Interface is how dissimilar data forms interoperate.
There is No Content
Content cannot be separated from the technological vehicles of representation and conveyance that facilitate it. "Meaning is a data conversion" (p. 145). There is no content, only data.
Trash, Junk, Spam
Trash is the set
of all things that have been cast out of previous sets, all that which no
longer has use. Junk is the set of all things that are not of use at this
moment, but may be of use at some time, and certainly may have been of use in
the past. Spam is an exploit. “Spam signifies nothing yet is pure signification” (p.146).
CODA: BITS AND ATOMS
Networks are always exceptional, in the
sense that they are always related, however ambiguously, to sovereignty.
This ambiguity
informs contemporary discussions of networks. Hardt and Negri describe the
multitude as a “multiplicity of singularities,” a collective group that remains
heterogeneous (the one and the many, sovereignty and multitude). Virno
recognizes that the multitude does not clash with the One; it redefines it.
Unity is not the State; rather, it is language, intellect, communal faculties
of the human race. The fact that the multitude is not “One” is its greatest
strength, giving it flexibility that centralized organizations lack.
Contemporary
analyses of “multitude” share significant affinities with Arquilla and Ronfeldt’s
analysis of “netwar.” In this intersection, political allegiances of Left and
Right tend to blur into a strange, shared concern over the ability to control,
produce, and regulate networks.
According to Arquilla and Ronfeldt, "netwar refers to an emerging mode of conflict (and crime) at societal levels, short of military warfare, in which the protagonists use network forms of organization and related doctrines, strategies, and technologies attuned to the information age" (p. 151). Netwar can be waged by "good" or "bad" actors as well as through peaceful or violent measures. The diversity and complexity of netwars make them an emerging form of political action. It takes a network to fight a network.
Despite their political differences, both the concept of the multitude and the concept of netwar share a common methodological approach: that "the 'unit of analysis' is not so much the individual as it is the network in which the individual is embedded."
The authors believe what is missing from Hardt and Negri
and from Arquilla and Ronfeldt is a “new
future of asymmetry” (p.152). Resistance IS asymmetry. Formal sameness may
bring reform, but formal incommensurability (having no common basis) breeds
revolution. “Open” or “free” networks exhibit power relations regardless of the
power held by the individuals who comprise them.
At this point, we pause and pose a question: Is the multitude always "human"? Can the multitude or netwars not be human and yet still be "political"? That is, are individuated human subjects always the basic unit of composition in the multitude? If not, then we must admit that forms such as the multitude, netwars, and networks exhibit unhuman as well as human characteristics.
These questions relate to the nature of constituent power in the age of networks. Not all networks are created equal, and networks often display asymmetrical power relationships. But if no one controls the network, how do we account for such differences and asymmetries?
Our
suggestion may at first seem perplexing. We suggest that the discussions over
the multitude, netwars, and networks are really discussions about the unhuman
within the human.
The word “unhuman”
does not mean against human or antihuman. Rather, the use of the term is
designed to ponder whether these emerging forms go far enough in comprehending
the portions of ourselves that are not fully human.
Difficult,
even frustrating, questions appear at this point. If no single human entity
controls the network in any total way, then can we assume that a network is not
controlled by humans in any total way? If humans are only a part of the
network, then how can we assume that the ultimate aim of the network is a set
of human-centered goals?
At both the macro and micro levels, it is easy to recognize elements in networks that inhibit total control or total knowledge: computer viruses, infectious diseases, viral marketing or adware, unforeseen interpersonal connections on social networks, and various other biological and man-made phenomena.
In
fact, it is the very idea of “the total” that is both promised and yet
continually deferred in the “inhumanity” of networks, netwars, and even the
multitude.
Networks are
constituted by the tension between the agency of individuals within the network
and the abstract “whole.”
The
network is this combination of spreading out and overseeing, evasion and regulation.
It is the accident and the plan. In this sense, we see no difference between
the network that works too well and the network that always contains exploits.
Another perspective, however, does see a great difference between successful networks and networks that fail. These differing viewpoints are part of the reason why Internet viruses and infectious diseases evoke such fear and frustration. Networks show us the unhuman in the inhuman and illustrate that individual human subjects are not the basic unit of network constitution.
For
this reason, we propose something that is, at first, counterintuitive: to bring
our understanding of networks to the level of bits and atoms, to the level of
aggregate forms of organization that are material and unhuman, to a level that
shows us the unhuman in the human.
Networks operate through continuous connections and
disconnections, but at the same time, they continually posit a topology. They
are always taking shape but remain incomplete.
The
unhuman aspects of networks challenge us to think in an elemental fashion. The
elemental is, in this sense, the most basic and the most complex expression of
a network.
The central concern of networks is no longer the action
of individuals or nodes in the network. Instead what matters more is the action
throughout the network, a dispersal of action that requires us to think of
networks less in terms of nodes (information) and more in terms of edges (space)—or
even in terms other than the entire dichotomy of nodes and edges altogether. “In
a sense, therefore, our understanding of networks is all-too-human…” (p.157).
Discussion Questions:
1. The videos of gunshot tracking give us an idea of how surveillance can be incorporated into society. How could a program such as this be used to track biological happenings? What type of universals would have to be agreed upon?
2. On page 118, the authors assert that network science "seeks a universal pattern that exists above and beyond the particulars of any given network. For this reason, network science can study AIDS, terrorism, and the Internet all as the same kind of being—a network." What are the similarities between biological and man-made networks? Are there differences?