Everyday Life As a Text: Soft Control, Television, and Twitter

By Michael Lahey

Published February 22, 2016 by SAGE Publications

Abstract

This article explores how audience data are utilized in the tentative partnerships created between television and social media companies. Specifically, it looks at the mutually beneficial relationship formed between the social media platform Twitter and television. It calls attention to how audience data are utilized as a way for the television industry to map itself onto the everyday lives of digital media audiences. I argue that the data-intensive monitoring of everyday life offers some measure of soft control over audiences in a digital media landscape. To do this, I explore “Social TV”—the relationships created between social media technologies and television—before explaining how Twitter leverages user data into partnerships with various television companies. Finally, the article explains what is fruitful about understanding the Twitter–television relationship as a form of soft control.

Introduction

A “deluge of data” is a common theme across contemporary business and consumer cultures in the United States. Digital technologies allow those with resources and know-how to create, track, and sort enormous sets of data whether they be global trends on heart disease or the “cacophony of short-burst” communications that define the social media platform Twitter (Carr, 2010). We see the emergence of products like Google Glass, eyeglasses that overlay data on top of our daily experiences. Through the glasses, we can video chat with friends, interface with Google maps, or have updates pushed to us about, for instance, suspended subway services. A likely fix in a world full of data: more data to solve problems of more data.

Data are ubiquitous in IBM’s “Smarter Planet” advertisements about the coming “tsunami of information”—Radio-Frequency Identification (RFID) chip transmissions, store transactions, medical records, emails, photos, videos, blogs, traffic patterns, and so on. One of the commercials in the series asks, “What if technology could capture all this information and turn it into intelligence?” IBM could help you identify patterns faster and “pull insights from the noise.” The company could help organizations “manage their people” and “mitigate risk.” And, most importantly, they can help you “convert data into action” (infoondemand, 2009).

Charles Duhigg’s (2012) New York Times Magazine piece “How Companies Learn Your Secrets” offers us another look into the mysterious world of data. Duhigg tells the tale of how Target uses data science to better understand its customers. One of the more humorous and alarming examples is how Target cross-references purchases to figure out when a family is expecting a child, even though no one has told Target. Because these algorithmic calculations are based on shopping data, Target can aim ads and discounts at these families more strategically.

These are just a few examples of the way our everyday lives are being translated into data. On one hand, access to data is cast as a coping mechanism for a world overrun with data. On the other hand, these data are cast as a treasure trove for businesses seeking audience attention. This emphasis on creating, managing, and utilizing data falls under the buzz phrase “Big Data”—data sets too large for traditional computation, along with the technologies, engineers, and statisticians that make such sets usable.

Television companies are also experimenting with leveraging Big Data as a way to understand audiences and manage risk in a data-rich digital media landscape. One of the reasons the television industry wants to understand its audiences better is that it is increasingly difficult to dictate when and where audiences will watch (Kastelein, 2013). The possibilities a digital media environment opens up for businesses and audiences alike directly trouble the industry’s ability to dictate clear windows of distribution. This is a problem for an industry that has historically been understood as a “fundamentally scarce service” (Sterne, 1999, p. 506).

On one hand, television companies fight against the problems of this environment by hard-coding digital rights management into web browsers and utilizing copyright law to slow down illegal uses of content (Moody, 2013).1 On the other hand, some television companies also actively seek to understand networked digital environments in terms of the information about audiences they can provide. Television companies hope to leverage this knowledge of audience behaviors in various ways to tame problems of attention. A key way this happens is through how television companies use social media for audience information.

This article explores how data are utilized in the tentative partnerships created between television and social media companies. Specifically, I look at the mutually beneficial relationship formed between television companies and Twitter—a social media platform that allows users to send single photos, video clips of 6 s or less via its Vine app, or 140-character bursts of communication known as tweets. Investigating the Twitter–television relationship is not about any single television program, as Twitter is ubiquitous in the social media efforts of virtually all television shows. Nor is this article about how television companies might use Twitter to directly engage audience attention. Rather, it is about how Twitter shares complex analyses of user behavior with television companies. Twitter calls this work the “TV Genome,” creating algorithms that connect tweets to television content from very few context clues. All the user has to do is use Twitter, and the work happens in the background.

In this light, I want to call attention to how audience data are utilized as a way for the television industry to map itself onto the everyday lives of contemporary audiences. An emphasis on everyday life is important because it is at the level of the everyday, as understood by Henri Lefebvre, where the materialization of attention and monetization takes place. In addition, everyday life, as Rita Felski (1999) argues, is understood as the space and time where we become “acclimatized to assumptions, behaviors and practices” (p. 31). While our habitual practices may be processes that we do not think about too much, I guarantee you that the television industry charts them in great detail.

The television industry has historically drawn its consumer data from viewer rating systems and focus groups. What is different about data collected from social media platforms is that they are “wild,” collected from everyday utterances rather than within a structured research scenario (C. Ang, Bobrowicz, Schiano, & Nardi, 2013). This “wild data” gives television companies access to a broader spectrum of consumer information in an open-ended format that allows for more careful tracking of changes in consumer sentiment over time.

Thus, I want to argue that the data-intensive monitoring of everyday life offers some measure of soft control over audiences in a digital media landscape. Borrowing from the work of Gilles Deleuze (1992), Henri Lefebvre (1984), James Beniger (1986), and Tiziana Terranova (2004), I use the phrase soft control to define the purposeful actions of the television industry to shape audience attention toward predetermined goals. These interactions between television companies and audiences develop autonomously over time while being punctuated by prompts and run through different iterations. The data collection that happens on Twitter can be seen as a soft control practice that works in the background to funnel information about audiences to television companies. This happens because Twitter gives television companies in-depth access to unstructured utterances from the everyday life of the Twitter user. The data produced in this instance occur because people are simply using Twitter the way it is supposed to be used. Twitter’s algorithmic scripts have data to parse because, well, we do the work. They are there to track, atomize, and tabulate us. By looking specifically at the role Twitter plays in harnessing knowledge about audience behaviors and practices, I will show how algorithmic scripts are put into action for the television industry and how the logic of soft control helps strengthen various television companies’ position in a digital media environment.

This is not to suggest that television companies can determine audiences in the strictest sense—that would clearly be false. Any investigation into “Social TV”—the relationships created between social media technologies and television—will quickly uncover that audience attention is not something that is easily taken for granted. In fact, this process of capturing eyeballs is never fully complete and always in flux.2 But this does not mean that television companies are not finding uses of social media to manage the terms of the industry–audience relationship. The amount of data we pump into this social feedback loop is astounding and points to the unequal dynamic between producers and audiences. Thus, even in a media environment that ostensibly gives more control to the audience, an uneven playing field exists.

Television, Social Media, and Data

Before moving on to explaining Twitter and the TV Genome, I want to briefly situate this discussion against the burgeoning importance of social media to the television industry in terms of direct engagement and, more importantly, data collection. There is a lot of chatter these days about the relationship of social and traditional media. Twitter is the first thing star reporter/TV-host Anderson Cooper reportedly checks when he wakes up (Anderson, 2011). Advertising Age sees the integration of social media into traditional media as producing a new type of hybrid media (Rubel, 2011). Gail Becker, global head of Edelman’s Digital Media, who counsels the National Association of Broadcasters, says that we have “officially entered into an era of social entertainment” in which people have begun to expect to interact with the entertainment they consume (Value, Engagement, 2011). Relatedly, in an Edelman survey of media consumers, 57% of “general consumers 18-54 in the United States” consider social networking a form of entertainment, a number that jumps to 70% among 18- to 29-year-olds (Value, Engagement, 2011).

Social media are, perhaps obviously, built around the concept of sharing, which is materialized in the ubiquity of “share” buttons across the Internet. These buttons allow you to share content across a variety of platforms and socially enabled sites. This web application programming interface (API)-enabled form of communication is hybrid from top to bottom. While some platforms like Facebook started ostensibly as a way to allow friends to communicate, consumer- and media-oriented companies have flooded this space as a way to reach out to their customers wherever they are. This mixture of consumer, personal, and interpersonal messages is precisely what these companies are after—“a superior alignment of commercial, consumer, and wider public interests” (Spurgeon, 2008, p. 113).

It is clear that there is a burgeoning connection between television and social media. According to Lost Remote, a blog about Social TV, “Facebook is [now] a huge distribution and promotional platform for TV shows” (Bergman, 2012). As of May 2011, 275 million Facebook users had liked television shows 1.65 billion times, and television shows are always well represented during the evenings among Twitter’s top 10 trending words (Bergman, 2012). Hoping to reproduce the so-called “water cooler effect” through social media, television companies believe they have to remap their products onto the times and spaces of contemporary consumers (Stelter, 2011).

Television companies use social media platforms like Facebook and Twitter for what we could call direct audience engagement. These instances can be mundane, like when Nina Dobrev, an actress formerly of the CW’s The Vampire Diaries, tweets out to fans:

Feeling soooooo much love from the Teens!!! Thank you for nominating me—you guys are amazing! <3 you!!!! @teenchoicenews

This tweet is retweeted by The Vampire Diaries’ official Twitter account, creating more circulation. A “perhaps-fan” (we really do not know much about “her”—whether she is a bot, a corporate account, etc.) like @dobrevselenas can respond to the tweet with “@ninadobrev you deserve it baby, i love you more <3.”

There are also award-winning (or at least nominated) uses of social media. The Shorty Awards give annual recognition to the best uses of social media by television companies. TNT’s Legends was nominated in 2015 for best use of a Twitter hashtag with #DontKillSeanBean, capitalizing on the popularity of the pop culture meme that questions why Sean Bean, the main actor in Legends, dies in everything. BBC America’s Orphan Black was also nominated in 2015 for its use of social media—Instagram, Tumblr, Twitter—to create content for the #CloneClub, a group of passionate fans named after the show’s central theme of genetic cloning (“Best in Television,” 2015). HBO was nominated in 2014 for its work with 360i, a digital marketing agency, on Game of Thrones to create #ROASTJOFFREY, a 48-hr, crowdsourced social media comedy roast of King Joffrey, a reviled character on the show (“Best Use of Social Media for Television,” 2014).

Although these examples are worthy of investigation in their own right, for the purposes of this article, an important way to understand the burgeoning relationship between social media and the television industry is through the types of partnerships that occur in the process of building and sharing data sets on potential audiences.

Television’s contemporary interest in mining social media platforms for user data parallels their interest in direct audience engagement through social media campaigns, and the two are often intertwined. Television companies look to partner with data-rich companies like Google, Facebook, Twitter, Acxiom, and BlueKai that offer more data on customers than the television industry could collect alone.3 These are what Joseph Turow (2006) would call “permission-based” databases that collect data to analyze consumer behavior (p. 88).

There are many reasons—including improving programming and digital ad targeting—behind this push into Big Data. For instance, all the major networks—ABC, CBS, and NBC—have set up their own analytics companies that mix first- and third-party data as a way to handle the flood of data from social media and traditional data sources (Thielman, 2015a, 2015b). Black Entertainment Television (BET) partnered with Adobe Social to create a social listening campaign to shift how it marketed the hit show Being Mary Jane (Enright-Schulz, 2014). Similarly, HBO partnered with Arktan SocialTrends and Facebook to mine user data as a way to shape promotion for the final season of True Blood (Aggarwal, 2014). In addition, Time Warner Cable utilizes user data to “target customers with the same advertising campaign simultaneously in cable television, mobile devices, the web, social media advertising, and other platforms” (Ungerleider, 2013). In doing so, Time Warner creates data profiles by combining viewing habits with information that data management companies collect such as voter registration records and real estate records. All these data create a very specific profile of who a user is or could potentially be. This personalization can then be fed back into the many interconnected platforms and technologies, creating slight differences in approach for each individual.
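To make the mechanics of such profile-building concrete, here is a minimal Python sketch of the kind of join described above: first-party viewing habits combined with third-party records keyed on a shared household identifier. The identifiers, field names, and join logic are invented for illustration; no company’s actual schema is being reproduced here.

```python
# Hypothetical sketch of combining first- and third-party data into a profile.
# All keys and fields are invented placeholders, not any real broker's schema.
first_party = {  # what a cable operator might hold itself
    "hh-102": {"viewing": ["True Blood", "Game of Thrones"], "platform": "mobile"},
}
third_party = {  # what a data management company might supply
    "hh-102": {"registered_voter": True, "homeowner": True},
}

def build_profile(household_id: str) -> dict:
    """Merge the two sources into one targetable profile."""
    profile = {"id": household_id}
    profile.update(first_party.get(household_id, {}))
    profile.update(third_party.get(household_id, {}))
    return profile

print(build_profile("hh-102"))
# {'id': 'hh-102', 'viewing': [...], 'platform': 'mobile',
#  'registered_voter': True, 'homeowner': True}
```

The merged record is what allows the same campaign to be coordinated, with slight individual variations, across cable, mobile, web, and social platforms.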

Let us turn now to Twitter for a more in-depth example of how data are used to shape the industry–audience relationship before explaining how we can view this as a form of soft control.

Twitter and the TV Genome

On June 4, 2013, @bobbychiu wrote “My expression after watching Game of Thrones this week . . . Omg” which was paired with a picture of a rabbit emerging from a hole, ears perked, face wide with terror.

The most interesting thing about this tweet is that it has no direct connection to the HBO show Game of Thrones. This user is not trying to talk back to Game of Thrones; he or she is merely sharing an opinion with a range of followers and using Twitter the way it is supposed to be used. Thus, this individual is just tweeting about what is happening in his or her everyday life. It just happens that this expression is hosted on a platform that catalogs this sentiment and algorithmically aggregates it with the other tweets connected to Game of Thrones.

On their own, @bobbychiu’s comments are fun, but once aggregated they have the potential to paint a picture of broader consumer sentiment. It makes sense that television companies would want access to Twitter’s data, as evidence suggests that Twitter usage portrays a different picture of the popularity of shows than traditional Nielsen ratings do (Amol & Vranica, 2013). In addition, Twitter has been leveraging its relationship with television in an attempt to monetize the platform. As Sarah Perez of TechCrunch says, Twitter is “betting big on being the TV companion app” (Perez, 2013).

To better understand how these data are aggregated, let us look at Bluefin Labs, one of many companies (Radian6, General Sentiment, Sysomos, Converseon, and Trendrr) in the growing field of Social TV analytics. According to Bluefin Labs, its clients include 40 of the largest TV networks in the US (Conyers, 2012). Founded in 2008, Bluefin was purchased by Twitter in February of 2013 to shore up Twitter’s relationship with television companies (MacMillan, 2013).

Bluefin calls its work in Social TV analytics the “TV Genome.” This “genome” is created by cross-referencing comments made on Twitter with program guide information, the names of characters and actors, closed-captioning text, demographic information about who is commenting, along with an advertising schedule that Bluefin created. The “genome” works in two tiers, one tied into those watching within a 3-hr window of the show and another focusing on the 90 days after a show’s premiere to catch time-shifted viewing.4
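As a rough illustration of this cross-referencing, consider the following Python sketch, which tags a tweet to a show when the tweet mentions a character, actor, or closed-captioning term, and assigns it to either the 3-hr live window or the 90-day time-shifted tier. The schedule structure, keyword lists, and matching rule are assumptions made for illustration only; Bluefin’s actual pipeline is proprietary.

```python
# A minimal, hypothetical sketch of TV Genome-style cross-referencing.
# Schedule fields, keywords, and the matching rule are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Airing:
    show: str            # from program guide data
    keywords: set[str]   # character/actor names, closed-captioning terms
    start: datetime      # scheduled air time

def match_tweet(text: str, posted: datetime,
                schedule: list[Airing]) -> list[tuple[str, str]]:
    """Attach a tweet to shows, tagged with one of the two tiers described above."""
    words = set(text.lower().split())
    matches = []
    for airing in schedule:
        if not words & airing.keywords:
            continue  # no character, actor, or caption term mentioned
        if airing.start <= posted <= airing.start + timedelta(hours=3):
            matches.append((airing.show, "live-window"))    # 3-hr tier
        elif airing.start <= posted <= airing.start + timedelta(days=90):
            matches.append((airing.show, "time-shifted"))   # 90-day tier
    return matches

schedule = [Airing("Game of Thrones", {"joffrey", "westeros", "thrones"},
                   datetime(2013, 6, 2, 21, 0))]
print(match_tweet("My expression after watching Game of Thrones this week Omg",
                  datetime(2013, 6, 4, 22, 15), schedule))
# -> [('Game of Thrones', 'time-shifted')]
```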

This is a data-driven approach that uses tweets as data points in conjunction with the unstructured data of television video feeds. To create something like a “graph” of television, Bluefin records linear television streams and turns these feeds into data. Data are not just something that exist out in the world: everything has only the potential to become data, and does so only when translated into a “unit or morsel” of information (Gitelman & Jackson, 2013, p. 1). To put it another way, this video is just a feed full of potential data until it is translated into mapped data. Bluefin has experience in trying to map video data; Michael Fleischman, the chief technology officer of Bluefin, actually worked to help machines recognize home runs by watching Boston Red Sox game broadcasts (Graham-Rowe, 2007).

When a video feed is translated into data, it is first broken down into still images, or frames, and then stored in something like Apache HBase, a distributed database designed for Big Data workloads. The images are stored as raster images (as pixels with discrete number values for color) or vector images (color-annotated polygons). The images are then broken down into particular features based on pixel placement, luminance, color, patterns of pixel movement (including camera movement), and so on. Once the video image is broken down into discrete units, it is available as data. An algorithm can then be taught to recognize what is happening on the screen based on how it cross-references pixel or polygon movement and audio wavelengths, as visual data are connected with automatic speech recognition systems to improve accuracy. Thus, for instance, Twitter’s algorithms know when King Joffrey from Game of Thrones is doing something sadistic or when Detective Linden from AMC’s The Killing is having a bad day. This ability to break the image down into readable data allows every channel that Bluefin monitors to be precisely mapped.
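The following Python sketch illustrates only the first stage of that process: decoding a feed into frames and reducing each sampled frame to coarse numeric features such as mean luminance, mean color, and a crude motion measure. The storage layer and the learned recognition model are omitted, and these particular features are illustrative assumptions rather than Bluefin’s.

```python
# A simplified sketch of turning a video feed into frame-level features.
# Feature choices here are illustrative assumptions, not Bluefin's method.
import cv2          # OpenCV; pip install opencv-python
import numpy as np

def frame_features(path: str, every_n: int = 30):
    """Yield (frame_index, luminance, mean_color, motion) for sampled frames."""
    cap = cv2.VideoCapture(path)
    idx = 0
    prev_gray = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            luminance = float(gray.mean())              # overall brightness
            color = frame.reshape(-1, 3).mean(axis=0)   # mean B, G, R values
            motion = 0.0
            if prev_gray is not None:
                # crude proxy for pixel/camera movement between samples
                motion = float(np.abs(gray.astype(int)
                                      - prev_gray.astype(int)).mean())
            prev_gray = gray
            yield idx, luminance, color, motion
        idx += 1
    cap.release()
```

In a real pipeline these per-frame features would be the raw material fed to a trained model, much as Fleischman’s earlier work taught machines to spot home runs in broadcast footage.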

The semantic analysis that Bluefin does for specific channels, including shows, advertisements, and interstitials, is made meaningful for television companies when combined with an analysis of tweets. This allows Bluefin to have a very specific knowledge of when conversations about a television program are occurring. To more precisely catalog what the folks on Twitter think of particular shows, Bluefin breaks the language of tweets down into categories that are more sophisticated than good or bad—“vulgar or polite, serious or amused, calm or excited” (Talbot, 2011). Thus, Bluefin uses deep machine learning algorithms to give order and meaning to comments pulled from social media.
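As a toy illustration of labeling along paired axes rather than a single good/bad scale, consider the Python sketch below. The word lists are invented placeholders; as noted above, Bluefin relies on deep machine learning rather than lookup tables, so this shows only the shape of the output, not the method.

```python
# Toy multi-axis categorization of tweets. Word lists are invented
# placeholders; the real system uses learned models, not lookup tables.
AXES = {
    "polite/vulgar":  ({"please", "thanks"}, {"damn", "wtf"}),
    "amused/serious": ({"lol", "haha"}, {"tragic", "grim"}),
    "excited/calm":   ({"omg", "!!!"}, {"meh", "whatever"}),
}

def categorize(tweet: str) -> dict[str, str]:
    """Label a tweet on each axis by which word list it hits more often."""
    words = set(tweet.lower().split())
    labels = {}
    for axis, (left, right) in AXES.items():
        hits_left, hits_right = len(words & left), len(words & right)
        if hits_left == hits_right == 0:
            labels[axis] = "neutral"
        else:
            first, second = axis.split("/")
            labels[axis] = first if hits_left >= hits_right else second
    return labels

print(categorize("omg joffrey wtf !!!"))
# {'polite/vulgar': 'vulgar', 'amused/serious': 'neutral',
#  'excited/calm': 'excited'}
```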

While Twitter also uses Bluefin to target advertisements at particular tweeters, the use of tweets to create a map of human behavior is another animal altogether (Lunden, 2013). This map ingests data from the variety of places we can be in our everyday lives—at work, at home, at a friend’s house, or at a bar. As long as we have a smartphone and a Twitter app (or another app built on Twitter’s API), we can do the things we may normally do: respond to a tweet by a friend, comment about the role of nudity in HBO programming, proclaim excitement over the trajectory of a particular character arc, or any other example from the vast array of choices that constitute our everyday lives. Translating the playing we do on Twitter into data, Bluefin atomizes our tweets and reassembles them into larger identified data trends that look something like water cooler “talk.” In this way, the social media we may create—in this instance, a tweet—becomes the data for television companies to better understand how we feel about television.

How are the things that occur in the Twitter–TV partnership any different from a longer history of companies trying to understand their audiences? According to James Beniger (1986), the technologies “for collecting and processing all these types of information” appear in the late 1910s and develop through the 1930s (p. 378). Yet this was still a relatively unsophisticated process at the time, as Karen Buzzard (2012) notes:

Prior to the 1930s, knowledge of media audiences consisted primarily of subjective impressions such as anecdotes, postcards mailed in by the audiences, and other schemes conceived by advertisers. (p. 2)

Since that time, audience research has grown into a robust industry supporting a range of other media industries as they tried to control the “reciprocal flow of information from the mass audience back to the media writers and programmers” with the goal of closing the gap between ideal behavior (what they wanted) and real behavior (what actually happened; Beniger, 1986, p. 276). By the 1960s,

the expansion of the research community also made the social scientist a common figure in marketing circles, and introduced social science terminology into marketing and advertising jargon. The result was a pressure to generate more detailed and deeper descriptions of consumer behavior. (Arvidsson, 2011, p. 277)

In relation to television, by the end of the 1960s, Nielsen Media Research had become a monopoly provider of audience information to the television industry. Nielsen’s methods have, over time, become more sophisticated, moving from diaries, in which a statistically meaningful sample of individuals self-reported their behavior, to the People Meter, which has technologically tracked viewing habits since 1984 (Buzzard, 2012).

The point of this brief historical tour is to situate practices of trying to understand audiences within a longer tradition. As Mark Andrejevic (2009) says, contemporary efforts “to track the behavior of viewers can trace their lineage back to the efforts of early audience rating researchers to find a two-way channel for monitoring the audience” (pp. 33-34). This means that the technological methods offered by social media platforms are “amplified or supercharged” versions of audience research and not something completely different (Deuze, 2009, p. 144). Thus, what makes social media compelling for media companies is the sheer amount of potentially usable data that these data-rich environments foster. Surveys and diaries are a voluntary form of engagement, whereas social media data are often referred to as “wild.” This means they are not based on surveys, diaries, or viewing logs collected from a small sample of TV viewers; rather, these data simply exist as a function of the way the Internet and social media technologies work. For instance, we simply have to log on to Facebook, click through some links, and chat with some friends, or post our opinion about a TV show on Twitter. This is a key feature of how soft control works now; we simply have to live and let the background scripts do their algorithmic parsing.

Soft Control and Everyday Life

How can Twitter’s TV Genome be seen as a form of soft control in the dynamics created in industry–audience relationships? To explain this, I need to fully define soft control. By any measure, control is a scary word. For me, it conjures up old fears of subliminal advertising, that is, this message made me do that specific thing, but I am unaware of the cause of my behavior.5 Although this looming, sinister form of control is a very popular usage of the term, it is but one of the many ways control has been imagined academically.

In The Control Revolution, a sweeping rereading of Earth’s history through the framework of informational control, James Beniger (1986) offers a good starting point for understanding how the term can be used. Beniger explains control using a range of definitions from determination to influence. He refers to these as existing on a continuum between stronger and softer forms of control. The one thing that unites these different definitions is the notion of a “predetermined goal.” Thus, to Beniger (1986), “all control is thus programmed” (p. 40).

Focusing on the role technology and the economy play, he looks at how public institutions dealt with social control in a 19th- and 20th-century industrial world increasing in size and speed. Beniger (1986) points to the concepts of information and feedback as central to understanding control. This means that since the 1840s, long before the “Information Age,” social organizations needed to be able to utilize information as quickly and efficiently as they did material energy resources. Simply, if the world was moving faster, more information was needed to shape the direction of that world.

Looking at the rise of the advertising industry as a “nascent infrastructure for control of consumption,” Beniger (1986) points to the long list of innovations in the industry—newspaper distribution numbers, coupon reinforcement, the scientific methods of audience investigation by advertisers like Claude Hopkins—as proof of this desire for control. This information was used as a feedback technology to gain a better understanding of the audience and how to reach it.

Beniger clearly is indebted to the work of cybernetics and information theory as seen through his citations of luminaries like Claude Shannon and Norbert Wiener, two founding figures in the study of information. Wiener (1948) defines cybernetics as the science of control and communication across a range of biological and man-made machines.6 The focus of cybernetics, as W. Ross Ashby (1957) says in An Introduction to Cybernetics, was to understand the behavior of machines as far as they were “regular, or determinate, or reproducible” (p. 1). This study of behavior wants to identify the range of possibilities of action as a way to chart and predict results in complex systems. In these complex systems, the concept of difference—or the change from one state to the next—was important, because it offered a way to plot, predict, and program change mathematically.

Henri Lefebvre analyzes the relationship of feedback technologies to control in the way cybernetic systems are utilized to shape human communication in a “bureaucratic society of controlled consumption.” The idea of bureaucratically controlled consumption is most clearly articulated in Lefebvre’s Everyday Life in the Modern World. He argues that society and its various “sub-systems” are functionally organized and rationalized, and produced and reproduced through programming, obsolescence, and management via cybernetic systems. It is in our everyday lives where this control, in the form of programming, takes place (Lefebvre, 1984). An important take-away from this is thinking about how cybernetic systems have the potential to produce as much control as they do freedom. This view of societal control exists in the intense amount of data that cybernetic systems collect and that human statisticians and computer-based algorithms interpret, which has become a central force in how media industries understand and shape their relationship to audiences.

To Gilles Deleuze, the networked nature of audiences and contemporary media does not work against a society of control, but rather is a chief feature of a control society. In his short work, “Postscript on the Societies of Control,” Deleuze discusses the transition from a disciplinary society built on enclosure—as you move from the hospital, to the factory, to the school—to a control society built on more open-ended forms of continuous control where technology interlinks all these previously separate domains. In this formulation, the modes of a control society are not unyielding but flexible—something Lev Manovich (2002) would call “modular” (p. 28). These systems of control are at the crux of a contradiction where freedom of spatial movement is paired with constant monitoring, auditing, and adjustment. As Gilles Deleuze (1992) says, a society of control is one where the “controls are a modulation” (p. 3). In other words, this logic of control is flexible, reconfigurable, and fast. “Control is about the constant subtle structuring of social life, the ways that we are sorted, tracked, cajoled, and tempted” (Wise, 2011, p. 162).

With all these data—credit scores, faces, passports, driver’s licenses, search patterns, websites visited—we have perhaps transitioned from the individuals of the disciplinary societies to the “dividuals” of the control society (Deleuze, 1992, p. 5). To Deleuze, “dividuals” are not us, per se; they “can be seen as those data that are aggregated to form unified subjects” (Cheney-Lippold, 2011, p. 169). In this way, there is a constant feedback loop between how we imagine ourselves and the categories created for us by all these data. John Cheney-Lippold (2011) sees the “digital construction of categories of identity” as a new axis of soft control where “control can be enacted through a series of guiding, determining, and persuasive mechanisms” (p. 169).

Applying Tiziana Terranova’s (2004) ideas about soft control is also instructive here because the way television companies use social media to collect and map audience data resembles the open-environment “biological computing” systems she identifies in Network Culture.7 To Terranova, these environments are characterized by potentially enormous productivity; their difficulty to control; “nonlinear interactions, feedback loops, and mutations”; and a central designer that wishes to produce an “emergent behavior” out of other actors (pp. 104-105).

For Terranova (2004), “biological computing models” have a moment of construction, then a positioning of constraints, and, finally, the “moment of emergence of a useful or pleasing form” (p. 118). Control is implemented in the beginning (founding the model) and the end (the survival of the most “useful or pleasing variations”) (Terranova, 2004, p. 118). If no “pleasing variations are found, then the model is fine-tuned” and sent through another modeling run (Terranova, 2004, p. 119).

Imagine biological computing on a much larger scale and you get close to conceptualizing what television companies are doing. Twitter’s TV Genome looks like an open-ended biological computing model. For instance, let us say Twitter analyzes the responses to an episode of Fox’s New Girl. The data sets are theoretically open, as tweets only trigger algorithms when certain parameters are met. The actors here—the tweeters—are given relative autonomy. They are not told their data are being used, and whether or not Twitter “ingests” a tweet does not affect the tweet itself. This ingested tweet is paired with a range of other tweets to create data for the Fox Network to view via Bluefin’s Signals App—an application that aggregates and displays the information from the TV Genome specific to that client. And like a biological model, it is open to change depending on when, where, and what people tweet. Thus, it is open ended; the TV Genome is always transforming. This mode of open-ended control works because it does not “require an absolute and total knowledge of all the states of each single component of the system” (Terranova, 2004, p. 119). As more people tweet and as Twitter tweaks its algorithms, the analysis of these tweets has the potential to gain in complexity.
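A minimal sketch can make this parameter-gated, open-ended ingestion concrete. In the Python below, tweets flow past a model that “ingests” only those meeting its current constraints, aggregates what it keeps, and can be re-tuned between runs; every name and threshold is hypothetical rather than drawn from Twitter’s actual system.

```python
# Hypothetical sketch of parameter-gated, open-ended ingestion.
# Class name, constraints, and thresholds are invented for illustration.
from collections import Counter

class GenomeModel:
    def __init__(self, required_terms: set[str], min_length: int = 3):
        self.required_terms = required_terms  # constraints set at construction
        self.min_length = min_length
        self.aggregate = Counter()            # what a Signals-style app displays

    def maybe_ingest(self, tweet: str) -> None:
        words = tweet.lower().split()
        if len(words) < self.min_length:
            return                            # tweet passes by untouched
        for term in self.required_terms & set(words):
            self.aggregate[term] += 1         # aggregation, not the tweet itself

    def retune(self, new_terms: set[str]) -> None:
        """The fine-tuning step: adjust constraints and run again."""
        self.required_terms = new_terms

model = GenomeModel({"jess", "nick", "schmidt"})  # e.g., New Girl characters
for t in ["Schmidt is the best", "lunch now", "Nick and Jess tho"]:
    model.maybe_ingest(t)
print(model.aggregate)  # Counter({'schmidt': 1, 'nick': 1, 'jess': 1})
```

Note that control here sits only at the ends: in founding the constraints and in deciding which aggregated variations to keep, while the tweeters in the middle act autonomously.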

According to Madeline Akrich (1992), “a large part of the work of innovators is that of inscribing this vision of (or prediction about) the world in the technical content of a new object” (p. 208, emphasis added). In this way, designers attempt to “define a framework” for how audiences will engage a particular object. Of course, no audience may come forth or they may do something radically different from the intended use, as was the case in how photoelectric light kits made in France were actually used in ancillary markets in Africa (Akrich, 1992). The failure of a particular object to be “correct” for an audience does not signify the end of the road per se. Rather, the experience translates into data to be fed back into the design process where designers redefine “actors with specific tastes, competences, motives, aspirations, political prejudices, and the rest” (Akrich, 1992, p. 208).

In the process of inscribing an object with particular uses, designers “script” potential outcomes. If we focus on the design of an object like a short promotional video, then we get a certain orientation to what design can be. We may think about lighting, actors, editing, rhetorical appeals present via dialogue, and so on. Talking about designer scripts that work in the background forces us to take a different approach. When I talk about background designer scripts in the Twitter–television relationship, I am referring to how user data are ingested, mapped, reduced, and made actionable through various audience retention strategies. Thus, in a sense, these scripts, that is, algorithms, lie dormant, waiting for users to do the work. Background design scripts may best be seen as proto-design, running as a parallel process to the creation of design objects, services, and environments.

The data that the algorithms behind the TV Genome collect are used to shape the direction of television companies as they come to grips with an environment that works against dictating windows of distribution. The background algorithms are used to create the structured tracking, personalization, and responsiveness that are becoming an important part of how television relates to digital media environments. Furthermore, the collection of data allows television a richer picture of the rhythms, places, and practices of the everyday lives of audience members. That these data are dynamic offers the potential for an even more complex rendering of our everyday lives as companies “measure” and “adjust” their relationship to audiences. Thus, the moment data are mapped, reduced, and fed back into audience retention strategies is when we can see a logic of soft control.

Conclusion

I have been looking at the role of data as a background script in designing soft control into the relationship of television to the everyday lives of digitally enabled audiences. On one hand, it is frustrating that we cannot know more; no company is going to share its proprietary algorithms or tell you specifically how data become actionable. On the other hand, there are still some important lessons to glean from acknowledging that these background scripts are in place.

First, the feedback loop between tweets, the TV Genome, and television industry practices is significantly sustained through the role algorithms play in shaping our everyday life. While I have been looking at the role algorithms play in extending the scope of industrial data collection, according to Tarleton Gillespie (2012), “algorithms play an increasingly important role in selecting what information is considered most relevant to us, a crucial feature of our participation in public life” (p. 1). Thus, we have algorithms that read tweets as data on one end, and algorithms that suggest search engine results on Google on the other end.

The important point to note here is how these algorithms work in our everyday life not by hailing us—calling for any particular mode of address—but by working in the background to collect “wild” data from our experiences of being alive and communicating. Eli Pariser (2012), in The Filter Bubble, says we tell ourselves a reassuring story. In a broadcast society, editors largely had control of the flow of information. They could tell us when, where, and how. The Internet ostensibly swept these structures away. But rather than the baton of control being passed to us, it has been passed from industrial gatekeepers to algorithmic ones.

Second, I want to point out that the amount of data collected by a company like Twitter (and, remember, they are but one player in this larger field of data collection) helps identify logics of soft control by reinforcing an asymmetry of information between those who hold data and those who do not. The “information flow” goes from our social media work to their databases and not the other way around (Solove, 2004, p. 162).

In the amount of knowable things circulating today—the texts of our everyday lives, or at least the information traces these texts leave behind—we can start to see the way Deleuze’s society of control might work. As entities like the television industry create more sophisticated portraits of who we are, they arguably gain the upper hand in patterning their practices toward our digital habits. As a nod to Pierre Bourdieu, these corporations know my habitus perhaps better than I do.

Article Notes

  • Declaration of Conflicting Interests The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: I, the sole author, agree to this submission and this article is not currently being considered for publication by any other print or electronic journal.

  • Funding The author(s) received no financial support for the research and/or authorship of this article.

Notes

  • 1. For work on intellectual property, see Lawrence Lessig’s Free Culture (2004), Code: Version 2.0 (2006), Remix (2008); Adrian Johns’s (2011) Piracy; Ted Striphas’s (2009) The Late Age of Print; and Rosemary Coombe’s (1998) The Cultural Life of Intellectual Properties.

  • 2. See I. Ang’s (1991) Desperately Seeking the Audience. In this book, she points out that the audience as conceptualized by television companies is a discursive construct and that actual audiences are too polysemic to be completely articulated in a closed discursive structure.

  • 3. For more information on firms like Acxiom and BlueKai, see Joseph Turow’s (2013) The Daily You.

  • 4. Unfortunately, the http://bluefinlabs.com/thesciencebehindit link has been dead since Bluefin was purchased by Twitter.

  • 5. See Stephen Fox (1977), The Mirror Makers. In this book on the history of creativity and research in the formation of the advertising industry, Fox retells the tale of market researcher James Vicary’s claim that he significantly raised sales of Coke and popcorn by slipping subliminal messages into movies. This claim turned out, surprisingly, not to be true. Also see Charles Acland’s (2011) Swift Viewing, where he positions popular understandings of subliminal influence as a way of coming to grips with social change in consumer society, and later in the information age.

  • 6. Note that cybernetics’ view of machinery is much broader than its popular usage. A machine is anything that can interact and change, whether metal or biological.

  • 7. In addition to scholars who engage with the issue of control mentioned above, see Kevin Kelly’s (1995) Out of Control: The New Biology of Machines, Richard Thaler and Cass Sunstein’s (2008) Nudge, and Norman Klein’s (2004) The Vatican to Vegas: A History of Special Effects.

This article is distributed under the terms of the Creative Commons Attribution 3.0 License (http://www.creativecommons.org/licenses/by/3.0/) which permits any use, reproduction and distribution of the work without further permission provided the original work is attributed as specified on the SAGE and Open Access page (https://us.sagepub.com/en-us/nam/open-access-at-sage).

Author Biography

Michael Lahey is an assistant professor in the Digital Writing and Media Arts department at Kennesaw State University. He studies the relationship between digital design, content, and audience interaction.

References