Posts Tagged ‘MIT Press’
Review of The Constitution of Algorithms
Title: The Constitution of Algorithms
Author: Florian Jaton
Publisher: MIT Press
Copyright: 2020
ISBN13: 978-0-262-54214-2
Length: 381
Price: $60.00
Disclosure: I received a promotional copy of this book.
There is a vast literature on the process of writing efficient computer programs, but relatively little has been written about the human processes through which those programs are created. In The Constitution of Algorithms, ethnographer Florian Jaton documents his active participation in a multi-year project at a Swiss image processing lab to prepare the ground for further research into the human elements of computer programming.
Preparing the Ground
Algorithms, which Jaton loosely defines as computerized methods of calculation, form the backbone of computer programming. These recipes, when properly developed and tested in the image processing context, yield reliable results that compare favorably with human judgment. He breaks the algorithm generation process into three parts: ground-truthing, programming, and formulating.
Ground-truthing is the process of establishing a data set with known correct characteristics. In Jaton’s case, because he joined a group developing face identification (as opposed to facial recognition) technologies, that meant hiring thousands of individuals through Amazon Mechanical Turk to look at a collection of photos and identify the regions, if any, within each image that contained a human face. The team reviewed these evaluations and discarded those that were incorrect. From that base, team members (including Jaton) could engage in programming to create algorithms to identify faces in the photos, which could be compared to the ground truth arrived at earlier. The final section, on formulation, looks at the mathematical underpinnings of these computational techniques. In a real sense the math is the most fundamental aspect of the project, but it wouldn’t make sense to present it earlier because the intended audience of ethnographers wouldn’t have the necessary context to evaluate that information until ground-truthing and programming were described.
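To make the relationship between ground-truthing and programming concrete, here is a minimal sketch of how a detector’s output can be scored against worker-annotated data. This is my own illustration, not code from the book, and it assumes the standard intersection-over-union overlap measure rather than whatever criterion Jaton’s lab actually used:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) rectangles."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Ground truth from the Mechanical Turk workers vs. the algorithm's output.
truth = (40, 50, 120, 140)      # face region the workers agreed on
predicted = (45, 55, 125, 150)  # region the algorithm proposed
print(f"IoU = {iou(truth, predicted):.2f}")  # 0.76; a common hit threshold is 0.5
```

An algorithm’s overall quality can then be summarized as the fraction of ground-truth faces it recovers above the chosen threshold.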
The ground-truthing part of machine learning is particularly interesting…one goal of recognition-driven image processing is to identify meaningful, or salient, aspects of a collection of pixels that an algorithm can use to return a true or false value (face or not a face). Salience is tricky – one promising algorithm that distinguished cats from dogs turned out to have been trained on an image set where most of the cats had a collar with a tag and the dogs did not. The algorithm latched onto those tags and, while that criterion worked well for the training set, it failed when applied to other images. I’m also glad that Jaton called out the human effort required to tag thousands of images or perform similar tasks, which is one of the hidden secrets of many machine learning efforts.
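The collar-tag anecdote is a textbook case of a classifier keying on a spurious feature. Here is a toy reconstruction of that failure mode, using scikit-learn and invented feature names (my own sketch, not anything from the book): a model trained on data where tags and cats coincide looks perfect in training and fails the moment the correlation breaks.

```python
from sklearn.tree import DecisionTreeClassifier

# Features: [has_collar_tag, ear_pointiness]; label: 1 = cat, 0 = dog.
# In this contrived training set every cat wears a tag and no dog does,
# while ear pointiness (the "real" signal) is noisy and overlapping.
X_train = [[1, 0.9], [1, 0.5], [1, 0.3], [0, 0.8], [0, 0.4], [0, 0.2]]
y_train = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(model.score(X_train, y_train))  # 1.0: flawless on the biased set

# In the wild, tags and species are uncorrelated, so the shortcut collapses.
X_test = [[0, 0.9], [1, 0.2]]  # a tagless cat and a tagged dog
print(model.predict(X_test))   # [0 1]: wrong on both animals
```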
Programming as a (Socio)Logical Process
When describing the programming process using a formal system, the author turns to sociotechnical graphs (STGs), which assign a letter to a specific task in a process and track how the tasks enter, move within, and potentially exit a technical process. The author notes that STGs have fallen by the wayside for this type of analysis, and I can see why. While it might be relatively easy for an analyst deeply embedded in a process to keep track of which letter corresponds to which task, doing so will strain a reader’s working memory and make interpreting the STG difficult. I’m not a sociologist and don’t have a recommendation for an alternative system, but I found the STGs hard to read.
What I did enjoy were Jaton’s interactions with other members of the lab’s team while he developed and corrected an algorithm to generate rectangles that contained faces identified by workers in the Amazon Mechanical Turk program. The common myth of the lonely programmer fueled by caffeine and spite is, thankfully, mostly fiction. Effective programmers seek out advice and assistance, which the author’s colleagues were happy to provide. The lab director took a chance on an outsider with limited coding skills, but Jaton’s willingness and apparent ability to make beneficial technical contributions surely led to friendly and productive interactions.
Conclusion
The Constitution of Algorithms is adapted from Jaton’s doctoral dissertation, which he admits in the foreword was “cumbersome.” There are a few uncommon phrasings and word substitution errors that made it past the editors, but overall Jaton and his MIT Press colleagues did an excellent job of transforming a specialized academic text into a book intended for a broader audience. I believe The Constitution of Algorithms will be useful for sociologists in general, ethnographers in particular, and other analysts who could benefit from a formal approach to the analysis of software development.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including more than 20 books for Microsoft Press and O’Reilly Media. He has also created more than 80 online training courses for LinkedIn Learning. He received his undergraduate degree in political science from Syracuse University and his MBA from the University of Illinois. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at http://www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Review: Experiencing the Impossible
Kuhn, Gustav. Experiencing the Impossible. MIT Press. 2019. 296 pp. ISBN: 978-0-262-03946-8
Author note: I had a presentation proposal accepted for the 2019 Science of Magic Association conference, for which author Gustav Kuhn serves as a committee member. The committee made its decision before I wrote this review.
Experiencing the Impossible: The Science of Magic, by Gustav Kuhn, explores the burgeoning field of scientific analysis of magic and its performance. Kuhn is a Reader (a rank above Senior Lecturer but below Professor) in Psychology at Goldsmiths, University of London and a member of The Magic Circle.
The science of magic is a relatively new field, but it’s one that lends itself to several different types of research. One way to examine how individuals react to (and, more importantly, interact with) magic is to ask their opinions about what they just saw. In one study, participants were shown a video of a magician making a helicopter disappear and were then asked whether they wanted to see a video showing another trick or one explaining how the trick was done.
You might be surprised to know that only 40% of the participants said they wanted to know how the trick was done. I personally take that result as a good sign…it means that if a typical person watches a routine on video with no connection to the performer, they will want an explanation less than half the time. If a performer can create an emotional bond with their audience, I believe that percentage will move even more in the performer’s favor.
Kuhn also points to arguments challenging whether audiences believe what they’re seeing is real. In his discussion, he quotes Bucknell University instructor Jason Leddington as arguing that “the audience should actively disbelieve that what they are apparently witnessing is possible.” A magical experience, then, only occurs when it appears that a law of nature is being violated. Similarly, Darwin Ortiz notes in Strong Magic that there is a struggle between our “intellectual belief” and “emotional belief”. We know that what we’re seeing isn’t real, but we want it to be so.
Throughout the rest of Experiencing the Impossible, Kuhn relates other aspects of the scientific examination of stage magic, with chapters discussing the role of processes including memory, visual perception, and the use of heuristics to reason about what you’re seeing. The latter topic draws on Daniel Kahneman’s description of System 1 and System 2 thinking from his book Thinking, Fast and Slow. System 1 is the fast system that relies on shortcuts and is easier to fool. System 2 is the slower, logical, and more careful system in which one considers available evidence and comes to a reasoned conclusion. The reason many of us lean on System 1 more than we should is that it is less effortful than thinking in depth.
Experiencing the Impossible is an excellent book that captures the state of research in a field of personal interest to me as both a performer and a fan of science. Kuhn’s choice of topics provides an outstanding basis for an initial foray into the science of magic and offers a solid platform for future research. Highly recommended.
Review of Tap: Unlocking the Mobile Economy
Title: Tap: Unlocking the Mobile Economy
Author: Anindya Ghose
Publisher: MIT Press
Copyright: 2017
ISBN13: 978-0-262-03627-6
Length: 240
Price: $29.95
Rating: 100%
I purchased a copy of this book for personal use.
I’m not a reviewer who gives out perfect scores like candy. In fact, I chose to use a 0-to-100% scale so I could provide nuanced ratings. I happily gave Malka Older’s debut novel Infomocracy a 98% because it was outstanding work but, for whatever reason, didn’t ring the bell for 100%. I believe I’ve given one other book, Intellectual Property Strategy (from the MIT Press Essential Knowledge series) a maximum rating. Tap, by Anindya Ghose and also from MIT Press, is the second.
The Mobile Landscape
Mobile devices are everywhere, with their spread continuing to gather pace as the prices of the devices and supporting services come down. Originally limited to voice and Short Message Service (SMS) communication due to a lack of bandwidth, smartphones now enable subscribers to make voice and video calls, search the web, and, of critical importance to marketers, engage in commerce. In Tap, Anindya Ghose of the Stern School of Business at New York University relates the results and implications of numerous academic studies of mobile commerce. The results provide a robust framework for marketers working in the mobile arena.
In his introduction, Ghose identifies four contradictions in what consumers want from mobile marketing and how they behave:
- People seek spontaneity, but they are predictable and they value certainty.
- People find advertising annoying, but they fear missing out.
- People want choice and freedom, but they get overwhelmed.
- People protect their privacy, but they increasingly use their personal data as currency. (p. 9)
Success in the mobile arena requires marketers to strike the proper balance among these four tensions.
Studies and References
After reading the first few chapters of Tap, I realized how many studies of mobile commerce have been conducted over the past ten years. As the author points out, tracking user movement and behavior, combined with the ability to test various forms of advertisements depending on context, provides a target-rich environment for academics and industry marketers to experiment. Ghose, who is a lead or co-author on many of the studies he cites, provides useful background on mobile commerce before dividing his coverage of the major forces of mobile marketing into nine chapters:
- Context
- Location
- Time
- Saliency
- Crowdedness
- Trajectory
- Social Dynamics
- Weather
- Tech Mix
Each chapter reviews the literature relating to its force and offers insights into how marketers can use those results to the benefit of their clients and consumers. It’s impossible to cover all of the forces in any detail, but I found the discussion of crowdedness and trajectory particularly interesting. Crowdedness, as the word implies, refers to crowded conditions typically found while commuting. On a subway or bus, commuters typically pay attention to their mobile devices, ear buds in, and tune out their surroundings. Advertisers can take advantage of this focused attention by distributing relevant and interesting advertisements (and advertorials) during those periods.
Trajectory refers to a consumer’s path, either as movement between two major objectives (home and office) or within a larger location (movement within a store). When outside, mobile phones can track user movements based on GPS and accelerometer readings. When inside, the same tracking can be done using wi-fi signals. Each individual’s tendency for future movement based on their current vector can be exploited by marketers to make attractive offers.
The other seven chapters provide similar coverage. In addition to crowdedness and trajectory, I found the chapter on location (Chapter 5) to be particularly interesting.
Conclusions
Marketing is not a one-way street. Consumers are bombarded with ads and advertorial content, raising the mental cost of search and the time (and data) spent waiting for ads to load on small-screen mobile devices. Many users employ ad blockers to remove as much of the clutter as they can, greatly speeding up their usage experience but depriving them of potentially useful information. Also, as Ghose points out in the fourth contradiction listed above, consumers increasingly use their personal data as currency and don’t hesitate to refuse a trade if they feel they’re not receiving sufficient value in return.
Ghose is a leading expert on mobile marketing. His new book Tap summarizes the field’s most important research in a compact, readable package that I believe is indispensable for anyone interested in the subject.
Review of Gravity’s Kiss
Title: Gravity’s Kiss
Author: Harry Collins
Publisher: MIT Press
Copyright: 2017
ISBN13: 978-0-262-34003-8
Length: 416
Price: $29.95
I received a promotional copy of this book from the publisher.
Albert Einstein predicted gravitational waves as part of his theory of general relativity, with the caveat that the waves would be so weak they would be almost impossible to detect. Harry Collins, Distinguished Research Professor of Sociology and Director of the Centre for the Study of Knowledge, Expertise, and Science at Cardiff University, has closely observed gravitational wave science and its practitioners since 1972. In Gravity’s Kiss, he documents the first detection of gravitational waves and comments on the process from the complementary perspectives of sociology and physical science.
Years in the Making
Gravity’s Kiss starts by describing the initial mention of what turned out to be the first detection of gravitational waves. The Event, as it was soon known, occurred on September 14, 2015. Collins was at home, scanning through the subject lines of emails from the gravitational wave community, when he noticed one mentioning an interesting occurrence during an engineering run of two new detectors. The devices were in Washington state and Louisiana, far enough apart that their readings could be compared, adjusted for the time to traverse the distance between the detectors, and examined for anomalies or glitches that could indicate an instrument fault or statistical coincidence that would invalidate the observation.
Collins’ method is to observe and report on science as it happens, so this message was his signal to more closely observe the process from his vantage as a trusted colleague with whom many practitioners willingly shared information. The author notes that, with one exception, he was the longest-tenured member of the gravitational wave community. He had observed years of work when everyone knew the odds of detection were vanishingly remote because their tools weren’t sensitive enough yet, and he had been part of conversations when teams thought perhaps they had detected gravitational waves. (They hadn’t. The signal was a “blind injection” inserted by project managers to rehearse the procedures to be followed after a real detection.)
Secrets and Methods
Part of the ritual of science demands that experimenters maintain a measure of distance and detachment from their subject. As such, even knowledge of whether The Event came from real observations or had been injected into the data stream was kept secret from the researchers until it was time to “open the box” and determine whether the signal was real or the result of a glitch or blind injection. After each party to the analysis described their work and the team agreed all necessary due diligence was done, the seals on a few files were broken and the data were compared to the signal. As it turned out, the signal was loud, clear, and free from mechanical glitches. Collins reports that the gravitational wave community celebrated the unveiling and turned almost immediately to the tasks of refining their analysis and writing the paper that would present their result to the world.
The paper, which everyone realized would be a landmark of the physics literature, brought the social side of science to the fore. Collins highlights two aspects of the paper writing and continuing analysis process that, in his opinion, hampered the community: secrecy and what he calls “relentless professionalism”. Not wanting to have their thunder stolen by scientists who were not part of the group, the consortium prohibited members from sharing anything about the detection with outsiders. While spouses and partners could be told, no one else was to know. This secrecy caused significant stresses within the group, particularly as the analysis and writing process dragged on. Over the five months from the initial detection on September 14, 2015 to the press conference on February 11, 2016, the need to avoid disclosure strained relationships with colleagues and family even as bits of information leaked out. One rumor analyst was even able to piece together enough information from canceled conference attendance and similar tidbits to correctly predict the press conference’s date.
The process also suffered from “relentless professionalism”, where members asked increasingly fine-pointed questions regarding method, methodology, and results. The quest for statistical significance to claim a discovery, which in the physical sciences is measured by a severe five-sigma criterion, and the words used to describe a result take on deep meanings within the community. Collins describes the lengthy and occasionally fraught process with the eye of an experienced observer and with enough knowledge of the subject matter to comment on both the content of the paper and how it came to be. In practice, scientific endeavor is far from the detached process it often claims to be. Deciding whether to use the term “direct detection” in the paper’s title comes down to not wanting to hurt the feelings of previous researchers who, though not part of the consortium, are well-regarded and could lay claim to initial detection under certain interpretations of their work.
Conclusion
Collins’ contemporaneous narrative provides an enjoyable and relatable read. The first two-thirds of the book describe the process leading from initial detection to just after the paper was released, while the last third provides sociological context to flesh out his approach, observations, and recommendations. While he doesn’t shy away from wondering at the complexity of the detection apparatus and analytical techniques, his descriptions are delightfully free of hyperbole and treat the protagonists as good people doing the best they can to ensure their results are correct and share them appropriately. Gravity’s Kiss is the story of a monumental success brought about by a team of able researchers. Harry Collins was ideally positioned to relate the tale and made the most of his opportunity. Highly recommended.
Review of Driverless from MIT Press
Title: Driverless
Authors: Hod Lipson and Melba Kurman
Publisher: MIT Press
Copyright: 2016
ISBN13: 978-0-262-03522-4
Length: 328
Price: $29.95
Rating: 94%
I received a promotional copy of this book from the publisher.
Research and development of driverless cars has reached the popular press over the past few years, but until now attempts to frame the debate have remained in the specialty press and academic journals. In Driverless: Intelligent Cars and the Road Ahead, Hod Lipson and Melba Kurman offer a valuable perspective on the technological and policy implications of autonomous vehicles.
Seven Myths
The concept of the driverless car has been around almost as long as the automobile itself, but only in the past few years has the technology underpinning the concept advanced and evolved enough to bring it close to realization. Even so, there is enough disagreement and skepticism to slow the adoption of driverless cars.
Lipson and Kurman organize their narrative around what they call the Seven Delaying Myths that slow advances in driverless car networks:
- Autonomous driving technology will evolve out of today’s driver-assist technology
- Technological progress is linear
- The public is resistant
- Driverless cars require extensive investment in infrastructure
- Driverless cars represent an ethical dilemma
- Driverless cars need to have a nearly perfect driving record to be safe enough
- The adoption of driverless cars will be abrupt
I can’t address each point in depth here, but I’ll make a few notes. The second myth, that technological progress is linear, is clearly false. Elementary analyses of networks show that non-linear growth occurs as the number of interconnected members increases. Those connections drive innovation through aggressive idea sharing, competition, and cooperation. The staggering growth of internet technologies and platforms puts this myth to rest easily.
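A back-of-the-envelope calculation shows why. In a network of n members, the number of possible pairwise connections is n(n-1)/2, so connectivity, and with it the opportunity for idea sharing, grows quadratically:

```python
# Possible pairwise connections among n members: n * (n - 1) / 2.
for n in (10, 100, 1000):
    print(f"{n:>5} members -> {n * (n - 1) // 2:,} possible connections")
# 10 -> 45, 100 -> 4,950, 1,000 -> 499,500: multiplying membership by 100
# multiplies the possible connections by roughly 11,000.
```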
The fourth point, that driverless cars require extensive investment in infrastructure, was true under the completely impractical Electronic Highway paradigm promulgated in the late 1950s and early 1960s. Lane sensors and wires embedded in the road and sensors installed in the cars were prohibitively expensive and required far more computing power than was reasonably available at the time. By 2014, the U.S. government backed research into a paradigm called V2X, where cars exchanged data with other cars, the road, and roadside sensors. Even though the available technologies and processing power were exponentially better than what was available in the 1960s, 1970s, and 1980s, the V2X system used a top-down approach where the system, writ broadly, managed each car’s behavior.
One of the authors attended a 2014 U.S. Department of Transportation conference on autonomous vehicles and was astounded to see just a single session of the multi-day event devoted to Google’s self-driving cars and deep learning algorithms. Disagreeing with the DOT’s top-down approach (noted by including the phrase intelligent cars in the book’s subtitle), the authors believe that putting the smarts and sensors in the cars and using the highway’s infrastructure as a series of checkpoints and information relays is the superior solution. I find their argument persuasive. Advances in deep learning and agent-based models let individual vehicles build their skills, which they can combine with other vehicles’ experiences to develop an ever-improving ensemble model through a process the authors call fleet learning.
The Road Ahead
Driverless vehicles have started to appear on American roads, but significant objections remain. What Lipson and Kurman label as Myth #6, that driverless cars need to have a nearly perfect driving record to be safe enough, poses two problems. The first is that it’s easy for critics to move the goal posts. Whatever safety level driverless cars have attained, it’s easy to use the specter of a runaway or hacked vehicle a passenger has no way to control to argue that the cars must be even safer. Second, humans are horrible drivers. According to World Health Organization figures updated in May 2014, 1.2 million people are killed in car accidents worldwide every year.
And yet, even though driverless cars offer the prospect of safer roads, the loss of privacy and autonomy weighs heavily in the balance. While Myth #3, that the public is resistant, is less true than it was, a significant proportion of Americans identify strongly with their car and see it as a way to maintain their freedom. Leaving the driving to a robot would deprive those individuals of an activity they cherish, which is an attitudinal barrier policy makers can’t ignore.
Conclusion
Driverless is an excellent book that offers a systematic and informative narrative on the history, state of the art, and future of driverless cars. Framing the issues through their Seven Myths offers a lens into the rhetoric supporting innovation and adoption of autonomous vehicles. There is much work to do on both the technological and policy sides—Lipson and Kurman’s work contributes meaningfully to that discussion.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created more than 50 online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at http://www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Review of Quantified: Biosensing Technologies in Everyday Life
Title: Quantified
Editor: Dawn Nafus
Publisher: MIT Press
Copyright: 2016
ISBN13: 978-0-262-52875-7
Length: 280
Price: $27.00
Rating: 92%
I received a promotional copy of this book from the publisher.
Fitness trackers, such as the Nike+ FuelBand, FitBit, and (in some modes) the Apple Watch have grown in popularity over the past several years. Knowledge of one’s activity levels and physical state, even if measured somewhat inaccurately by contemporary sensors, empowers users by providing insights into their relative health and activity. Other sensors, including implanted devices such as pacemakers, record data more accurately at the cost of greater intrusion upon the self. In Quantified: Biosensing Technologies in Everyday Life, Dawn Nafus, a Senior Research Scientist at Intel Labs, leads an investigation into the anthropological implications of new technologies and applications.
Organization and Coverage
Quantified is a collection of papers from the Biosensors in Everyday Life project, a multi-year effort with representatives from several institutions that examined how biosensing technologies, using either “wet” sensors (e.g., saliva, blood, or another bodily fluid) or “dry” sensors (e.g., heart rate, temperature, or blood pressure), impact individuals and society as a whole. Nafus divided Quantified into three sections: Biosensing and Representation, Institutional Arrangements, and Seeing Like a Builder. The first section, Biosensing and Representation, contains four chapters that examine the Quantified Self (QS) movement from an academic perspective. The first three pieces are, as Nafus admits, written by academics using academic language. I was happy to discover those pieces are accessible to the general reader, which isn’t always the case with articles or dissertations written by specialists for specialists. For non-academics like myself, the first three chapters provide a useful glimpse at how professional scholars approach biosensing as both practice and artifact. The fourth piece, by Wired contributing editor and QS movement leader Gary Wolf, provides a bit of push-back against the strictly academic approach to biosensing.
The Institutional Arrangements section examines QS in terms of regulation, privacy, and autonomy. Images of Jeremy Bentham’s panopticon and the assumed observation presented in Foucault’s Discipline and Punish or Orwell’s 1984 immediately come to mind, but as with every new technology, access to information is regulated by differing privacy regimes at the regional, national, and supranational levels.
The final section, Seeing Like a Builder, approaches biosensing from the perspective of mechanical engineering, device design, and data management. The first chapter is an edited conversation between Nafus, Deborah Estrin of Cornell Tech in New York City, and Anna de Paula Hanika of Open mHealth about the role of open data in the biosensing movement. Subsequent chapters investigate environmental monitoring, data available through the City of London’s bike rental program, and personal genomics.
Topics of Interest
I’ve written a fair amount about privacy issues and public policy, so I naturally gravitated toward the essays in the Institutional Arrangements section. In the Biosensing in Context chapter, Nissenbaum and Patterson apply the framework of Contextual Integrity to data captured by biosensors. As the name implies, Contextual Integrity addresses the appropriate sharing of information given its context, rather than a coarser set of norms established by law or policy. Individuals taking advantage of QS technologies might want to share information with other members of the movement to gain insights from their combined knowledge (called the “n of a billion 1’s” approach elsewhere in the collection). Marking appropriate sharing and usage depends on accurate metadata, which is discussed in Estrin and de Paula Hanika’s exploration of the Open mHealth data framework in the Seeing Like a Builder section.
In Disruption and the Political Economy of Biosensor Data, Fiore-Garland and Neff address the narrative that new technologies favor democracy and democratization. Specifically, they challenge the notion that disruptive change is, by definition, good. As they note:
In their most extreme form, disruption discourses use the concepts of democracy and democratization as ways to describe technological change, and in doing so ascribe social power to technological change in a teleological, deterministic way: if we say a technology disrupts power by bringing democratic access to data or power, then the technology will be democratic.
As rhetorical constructs, “disruption” and “democratization” invoke ideas of personal freedom and autonomy, implicitly denying traditional authorities control over one’s data. As with most business models based on platforms that provide the medium through which data is shared (e.g., Facebook), this argument is inherently self-serving. In the United States, private companies face few barriers to collecting and analyzing individual data, and practically none at all if the data has been shared openly and intentionally. While the interaction of health privacy laws and QS data sharing has yet to be tested, existing precedent argues strongly in favor of an interpretation favorable to companies that want to analyze the data for private gain.
I also enjoyed Marc Böhlen’s chapter Field Notes in Contamination Studies, which chronicled his team’s effort to track water quality in Indonesia. Böhlen’s team had to wrestle with the cultural implications of their work and account for both the expectations of the Indonesian citizens affected by their monitoring and the initial suspicions of the Indonesian government. I hadn’t encountered a narrative of this type before, so I appreciated learning more about his team’s work.
Conclusion
Quantified is an excellent first multidisciplinary study of the Quantified Self movement. The field is certain to evolve quickly, but the pieces in this book provide a strong base on which to perform future analysis.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created more than 40 online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Review of This Is Why We Can’t Have Nice Things
Title: This Is Why We Can’t Have Nice Things
Author: Whitney Phillips
Publisher: MIT Press
Copyright: 2015
ISBN13: 978-0-262-02894-3
Length: 248
Price: $24.95
Rating: 90%
I received a promotional copy of this book from the publisher.
Ah, trolls…so much fun to watch when they’re harassing someone you think deserves it and so infuriating when they get under your skin. Whitney Phillips, a lecturer in the department of communications at Humboldt State University, wrote her doctoral dissertation at the University of Oregon on trolling behavior. That dissertation provides the foundation for This Is Why We Can’t Have Nice Things from MIT Press.
What is Trolling?
Phillips notes that the central theme of all trolling is lulz, which she defines as amusement at other peoples’ distress. Proactive schadenfreude, I guess. Trolls are perfectly happy to derive their enjoyment from regular users, public figures, and other trolls. All that matters are the lulz.
One of the first widespread instances of trolling took place when a group of trolls invaded the Usenet newsgroup rec.pets.cats, asking increasingly odd questions and suggesting inappropriate solutions to feline health issues. Regardless of your cat’s respiratory issues, you probably won’t need to aerate it with a .357 hollow-point bullet. I never visited the rec.pets.cats group, but discussion of the trolls’ behavior leaked over to the groups I did participate in. Even the collateral damage was significant. Another early example on Usenet, though one that bordered on spam as well, was “Serdar Argic”, an alias for what appeared to have been multiple posters sending out hundreds of lengthy posts per day denying the Armenian genocide from the early 20th century to groups such as soc.culture.history.
Trolling as Rhetoric
As a communications scholar, Phillips takes on trolling as a rhetorical activity, placing it in a broader cultural context as both product and amplifier of certain aspects of society, specifically the masculine drive for domination and the 24-hour news cycle.
One reason middle school is such a vile experience for many children is the constant barrage of status games, where kids try to find their place in society at the expense of their classmates. Male trolls, who appear to dominate the landscape, continue this type of aggressive behavior online. They base their rhetorical strategies on Arthur Schopenhauer’s The Art of Controversy, which melds Aristotelian logic and Socratic dialectic with the Dark Side of the Force. The trolls’ goal is to invoke negative emotions from their targets and, upon eliciting insults or harsh language in response to their own provocations, remind their victims that there’s no room for rudeness in civilized argument and go right back to taking arguments out of context, insulting their opponent, and racking up the lulz.
Phillips also takes issue with conservative media, particularly Fox News and its handling of the Birther controversy, which raised the question as to whether President Barack Obama (usually spoken as Barack HUSSEIN Obama) should release his long-form birth certificate and, after it was released, whether it was a legitimate document. Fox News rode that story hard for much of 2008 and 2009 — you can still hear the echoes if you listen closely. Trolls took advantage of the coverage and some images of Obama to create intentionally offensive and racist memes.
That’s not to say trollish behavior is strictly the purview of Fox News and its ilk. When the Tea Party affiliate in Troy, Michigan had early success turning sentiment against a levy intended to fund the town’s library, an advertising agency devised a campaign purporting to be from a group named Safeguarding American Families. The ads expressed opposition to the measure and announced the group would hold a book-burning party. The outrage at this fictitious announcement turned sentiment in favor of the ballot measure, which ultimately passed.
Phillips also offers an interesting commentary on trolls as trickster characters. The trickster is known for undercutting the foundations of a society’s culture or mores but not replacing them with anything. Rather than offer a helpful solution for how things could be done better, tricksters start a fire and walk away. When there are no more lulz to be had, the troll’s work is done.
Transitioning to a Publishable Book
Academic writing is often completely impenetrable to anyone who isn’t a specialist in the author’s field of inquiry. My brother wrote his dissertation on a public policy subject I found interesting, but I couldn’t get through more than three pages of the final document. (Sorry, Doug. I know I said I read the whole thing, but my soup spoon kept creeping toward my eyeballs.) Passive voice is used to maintain a semblance of objectivity and distance, specialized language pervades the text, and rewrites continue until the ultimate academic hazing ritual is complete.
Kind of makes me wonder if dissertation committees haven’t been trolling candidates since the 1500s.
Phillips and her editors did a terrific job of excising unneeded jargon from the text, though some usage and conventions they kept leap off the page. The seemingly ubiquitous forward slash appeared in the section on method/ology, but at least there were no indiscretions on the order of the visual pun When the (M)other is a Fat/Her that William Germano mentions in Getting it Published. That said, while phrasings indicating someone is “gendered” as male have entered the general literature, saying someone was “raced” as Caucasian still seems odd to this generally interested reader.
Conclusions
This Is Why We Can’t Have Nice Things is a terrific introduction to the world of trolling, exploring how trolls put on figurative masks (or literal masks in the case of online anonymity) and generate lulz from those they encounter. As a former competitive debater in high school and college, I’m dismayed by the violence done to my beloved art of rhetorical controversy. Score some lulz for the trolls, I guess. Highly recommended.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created more than 20 online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Book Review: Virtual Economies from MIT Press
Title: Virtual Economies
Authors: Vili Lehdonvirta and Edward Castronova
Publisher: MIT Press
Copyright: 2014
ISBN13: 978-0-262-02725-0
Length: 294
Price: $45.00
Rating: 94%
I received a promotional copy of this book from the publisher.
Designing playable, let alone interesting, video games is difficult. Massive multiplayer games, especially those that allow trade among players, increase design complexity considerably. It’s easy to get lost in the weeds, tweaking prices of individual items or resources to make them more or less accessible to the players and finding the best ways to move money into or out of the game’s economy.
In the face of that complexity, designers must remember their primary goal: earning money for the publisher. Early in Virtual Economies, Lehdonvirta and Castronova lay out the three main objectives of virtual economy design: creating content (both by the producers and the players), attracting and retaining users (attention), and monetizing the game’s virtual resources to create an income stream for the producers. These objectives frame their analysis throughout the book, providing a coherent narrative that emphasizes the importance of designing a system so it generates revenues needed to sustain a game or community.
Unintended Consequences
One source of joy and fear for designers is discovering how their users will creatively exploit the rules of a game to create the experience they want. In fact, the authors point out that designing an inefficient currency might make a game more playable, perhaps because players would develop strategies and tactics to work around the inefficiencies or negotiation and trust issues would lead to interesting player interactions.
You can also try to make virtual money through traditional economic activity. In games, as in any economy, some players search for arbitrage opportunities. When discrepancies arise between the objective value of an item and its perceived value, investors can attempt to make a profit by buying or selling the item. In the stock market, these inefficiencies might arise when a company’s stock is undervalued because investors give too much weight to recent sales data. Investors can buy the stock, hold it until it reaches its proper value, and sell to collect the profits.
Some games offer more straightforward examples, such as allowing users to buy a leather jerkin at a shop in one part of the virtual world and sell it in another region for a significant profit. In either case, players who enjoy this type of activity can take advantage of in-game commercial opportunities.
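The jerkin example reduces to a few lines of logic. Here is a toy sketch with invented region names and prices (mine, not the authors’): scan the regional prices, buy at the cheapest market, and sell at the dearest one if the spread beats the cost of getting there.

```python
# Hypothetical prices for the same leather jerkin, by region, in gold.
prices = {"Harbor Town": 12, "Mountain Keep": 19, "Capital City": 15}
travel_cost = 3  # assumed flat cost to haul goods between any two regions

buy_here = min(prices, key=prices.get)
sell_here = max(prices, key=prices.get)
profit = prices[sell_here] - prices[buy_here] - travel_cost
if profit > 0:
    print(f"Buy in {buy_here}, sell in {sell_here}: {profit} gold profit")
```

The spread itself is a design lever: shrink it and trading dies out, widen it and a merchant play style becomes viable.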
Faucets and Sinks
Just as players try to acquire game resources, designers must find ways to remove those resources from the game. Maintaining the proper flow of money using macroeconomic policies requires a tricky balancing act between having too much or not enough money in the system. Without income, players can’t buy items they need or desire, but too much money produces in-game inflation that puts even routine purchases out of reach of newer players.
Lehdonvirta and Castronova describe how designers can use money faucets and money sinks to add or remove virtual currency from the game. Money faucets might be as simple as gaining treasure from killing orcs or as complex as arbitrage, while money sinks could include maintenance costs for dwellings, replacing damaged equipment, or securing transport to remote areas.
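A toy simulation (my own sketch, not a model from the book) makes the balancing act visible: when faucets consistently outpace sinks, the money supply, and eventually in-game prices, drift upward.

```python
# Toy model: faucets add currency each day, sinks drain it.
money_supply = 100_000
players = 1_000
faucet_per_player = 50  # e.g., treasure looted from defeated orcs, per day
sink_per_player = 45    # e.g., repairs, housing upkeep, travel fees

for day in range(30):
    money_supply += players * (faucet_per_player - sink_per_player)

print(f"Money supply after 30 days: {money_supply:,}")  # 250,000
```

A designer chasing price stability tunes the sink side, raising repair costs or fees, until the net daily flow is close to zero.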
Virtual Becomes Real
Finally, it’s entirely possible for in-game items and virtual currency to cross over into the real world. Some rare World of Warcraft items command hundreds of dollars on eBay or elsewhere, and entire companies in Romania and China make money through “gold farming” (defeating monsters to gain their treasure and selling the gold to other players) or leveling up characters for players who lack either the time or inclination to do it themselves.
Virtual currency can also be used in place of real money for physical transactions, as happened with the Q coin used in Chinese producer Tencent’s QQ service. A lack of credit cards or easy online payment hampered online commerce in China at the time, so players used Q coins as a medium of exchange. Players transferred Q coins to settle debts or, after the company (at the insistence of the People’s Bank of China) limited the amount that could be transferred at one time, created accounts with standard amounts of Q coins and gave their transaction partners the account’s password.
Conclusions
Virtual Economies combines standard material found in earlier works such as The Economics of Electronic Commerce with new applications told through the eyes of individuals who are both academic analysts and practitioners. Specifically, Lehdonvirta and Castronova provide a substantial overview of traditional economics, such as supply and demand curves and marginal analysis, as well as more recent topics from behavioral economics that help explain why and how individuals deviate from the traditional rational actor model. Adding in discussions of what makes for a good currency, how markets function, and macroeconomic issues removes the need for students to buy multiple texts to get the full picture.
Many professors and independent readers will choose to supplement this book’s information with reading packets and online resources, but Virtual Economies could easily stand alone in any context. Highly recommended.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created more than 20 online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Book Review: MOOCs, by Jonathan Haber
Title: MOOCs
Author: Jonathan Haber
Publisher: MIT Press
Copyright: 2014
ISBN13: 978-0-262-52691-3
Length: 227
Price: $13.95
Rating: 90%
I purchased a copy of this book for personal use.
MOOCs, or massive open online courses, offer free classes to anyone with internet access and a willingness to learn. As author Jonathan Haber notes in his recent MIT Press book MOOCs, this educational innovation is working its way through the hype cycle. Having first touted MOOCs as an existential threat to traditional “sage on the stage” lecture-based learning, the media has inevitably turned to highlighting the platform’s flaws. How MOOCs evolve from their freemium model remains to be seen.
Haber is an independent writer and researcher who focuses on education technology. This book is based in part on his attempt to re-create a philosophy undergraduate degree by taking free online courses and, where necessary, reading free online textbooks. In MOOCs, Haber captures the essence of the courses, both through his personal experience as well as his encapsulation of the history, current practice, and impact of MOOCs in the social, educational, and corporate realms.
MOOCs as a Learning Environment
The allure of MOOCs centers around their ability to share knowledge with students who might not be able to attend MIT, Georgetown, Stanford, the University of Edinburgh, or other leading institutions. Students can watch videos on their own schedule and, if they’re not concerned about receiving a Statement of Accomplishment or similar recognition, they don’t have to turn in homework or take quizzes on time or at all.
Most videos are 5-10 minutes in length, though some courses that present complex content can have videos that stretch to as long as 45 minutes. Production values range from a professor sitting in their office and facing a camera (often with PowerPoint slides displayed at least part of the time the professor speaks) to videos including animations and location shots that take significant time and budget to produce.
MOOCs offer three general grading policies: quizzes and tests with multiple-choice or fill-in-the-blank questions, computer programs submitted to an automated grader (very common in machine learning courses), and peer grading. There’s no possible way for professors to grade essays or computer programs from thousands of students, so they have to rely on objective mechanisms and peer grading to carry the load. Objective tests are acceptable, but many students dislike peer review even in cases where it’s clearly necessary.
Institutions sponsoring MOOCs go to great lengths to distinguish students who complete a MOOC from their traditional students. Certificates or Statements of Achievement stress that the holder is not a Wharton/Stanford/MIT student and that the certificate conveys no rights to claim such status. Most MOOCs also use much looser grading standards than traditional courses. For example, students are often allowed multiple attempts at homework or exams and the total grade required to pass a MOOC is often in the 60-70% range. These relaxed requirements make certificates easier to earn and probably increase retention, but the end result is a much less rigorous test of student ability.
Controversies
As with any disruptive technology, MOOCs have generated controversy. The first issue is that, despite their huge enrollments (some courses have more than 100,000 students registered), the courses have equally huge drop-out rates. As an example, consider the following statistics from the September 5, 2014 session of the Wharton School’s course An Introduction to Financial Accounting, created and taught by Professor Brian Bushee (which I passed, though without distinction):
- Number of students enrolled: 111,925
- Number of students visiting course: 74,599
- Number of students watching at least one lecture: 61,130
- Number of students submitting at least one homework: 25,078
- Number of students posting on a forum: 3,497
- Number of signature track signups: 3,953
- Number of students receiving a Statement of Accomplishment: 7,689
- Number of students receiving a Statement of Accomplishment with Distinction: 2,788 (included in total receiving SoA)
The ratios that stand out: only 54.6% of enrolled students watched at least one lecture, 22.4% submitted at least one homework, and 6.87% earned a Statement of Accomplishment. That pass rate is fairly typical for these courses. While the percentage seems minuscule, another MOOC professor noted that, even with just 5,000 or so students passing his online course, his 10-week MOOC cohort represented more students than had passed through his classroom in his entire career.
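Those percentages are easy to verify from the figures above; here is my own quick arithmetic check using the reported numbers:

```python
enrolled = 111_925
stages = {
    "watched at least one lecture": 61_130,
    "submitted at least one homework": 25_078,
    "earned a Statement of Accomplishment": 7_689,
}
for stage, count in stages.items():
    print(f"{stage}: {count / enrolled:.1%}")
# watched at least one lecture: 54.6%
# submitted at least one homework: 22.4%
# earned a Statement of Accomplishment: 6.9%
```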
Another concern is who benefits from MOOCs. Students require internet access to view course videos, at least in a way that can be counted by the provider, so there is a significant barrier to entry. Surveys show that the majority of MOOC students are university educated, but there are still large groups from outside the traditional “rich, Western, educated” profile. So, while many students appear to come from richer, Western countries, the courses do overcome some barriers to entry.
Finally, MOOCs raise the possibility that courses from “rock star” professors could replace similar offerings taught by professors at other schools. For example, San Jose State University licensed content from a popular Harvard political philosophy course taught on edX with the intention that their own professors would teach to the acquired outline, not their own. The philosophy faculty refused to use the content and wrote an open letter to the Harvard professor complaining about the practice. A similar circumstance led Princeton professor Mitchell Duneier, who created and taught a hugely popular sociology course offered by Coursera, to decline permission to run his course a second time. Coursera wanted to license his content for sale to other universities, which could save money by mixing video and in-person instruction. Duneier saw this action as a potential excuse to cut states’ higher education funding and pulled his course.
Conclusions
Haber closes the book with a discussion of whether or not he achieved his goal of completing the equivalent of a four-year philosophy degree in one year using MOOCs and other free resources. He argues both for and against the claim (demonstrating a fundamental grasp of sound argumentation, at the very least) and describes his capstone experience: a visit to a philosophy conference. His test was whether he could understand and participate meaningfully in sessions and discussions. I’ll leave his conclusions for you to discover in the book.
I found MOOCs to be an interesting read and a useful summary of the developments surrounding this learning platform. That said, I thought the book could have been pared down a bit. Some of the discussions seemed less concise than they might have been and cutting about 20 pages would have brought the book in line with other entries in the Essential Knowledge series. It’s hard to know what to trim away, though, and 199 small-format pages of main text isn’t much of a burden for an interested reader.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created more than 20 online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at http://www.curtisfrye.com and follow him as @curtisfrye on Twitter.