Review of The Constitution of Algorithms
Title: The Constitution of Algorithms
Author: Florian Jaton
Publisher: MIT Press
Copyright: 2020
ISBN13: 978-0-262-54214-2
Length: 381
Price: $60.00
Disclosure: I received a promotional copy of this book.
There is a vast literature on the process of writing efficient computer programs, but relatively little has been written about the human processes in which those programs are created. In The Constitution of Algorithms, ethnographer Florian Jaton documents his active participation in a multi-year project at a Swiss image processing lab to prepare the ground for further research into the human elements of computer programming.
Preparing the Ground
Algorithms, which Jaton loosely defines as computerized methods of calculation, form the backbone of computer programming. These recipes, when properly developed and tested in the image processing context, yield reliable results that compare favorably with human judgment. He breaks the algorithm generation process into three parts: ground-truthing, programming, and formulating.
Ground-truthing is the process of establishing a data set with known correct characteristics. In Jaton’s case, because he joined a group developing face identification (as opposed to facial recognition) technologies, that meant hiring thousands of individuals through Amazon Mechanical Turk to look at a collection of photos and identify the regions, if any, within each image that contained a human face. The team reviewed these evaluations and discarded those that were incorrect. From that base, team members (including Jaton) could engage in programming to create algorithms to identify faces in the photos, which could be compared to the ground truth arrived at earlier. The final section, on formulation, looks at the mathematical underpinnings of these computational techniques. In a real sense the math is the most fundamental aspect of the project, but it wouldn’t make sense to present it earlier because the intended audience of ethnographers wouldn’t have the necessary context to evaluate that information until ground-truthing and programming were described.
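The comparison between a program's output and the ground truth described above can be sketched in code. This is a minimal illustration, not the lab's actual method: the function names and the intersection-over-union metric are my assumptions about how a predicted face rectangle might be scored against a worker-labeled one.

```python
# A minimal sketch of ground-truth evaluation: score an algorithm's
# predicted face rectangle against a worker-labeled one using
# intersection over union (IoU). Boxes are (x1, y1, x2, y2) tuples;
# all names and the 0.5 threshold are illustrative assumptions.

def iou(box_a, box_b):
    """Return the intersection-over-union of two axis-aligned boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero area if the boxes are disjoint).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def is_match(predicted, ground_truth, threshold=0.5):
    """Count a detection as correct if it overlaps the labeled box enough."""
    return iou(predicted, ground_truth) >= threshold
```

Scoring every prediction this way yields the accuracy figures that let the team compare competing algorithms against the same ground truth.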
The ground-truthing part of machine learning is particularly interesting…one goal of recognition-driven image processing is to identify meaningful, or salient, aspects of a collection of pixels that an algorithm can use to return a true or false value (face or not a face). Salience is tricky – one promising algorithm that distinguished cats from dogs turned out to have been trained on an image set where most of the cats had a collar with a tag and the dogs did not. The algorithm latched onto those tags and, while that criterion worked well for the training set, it failed when applied to other images. I’m also glad that Jaton called out the human effort required to tag thousands of images or perform similar tasks, which is one of the hidden secrets of many machine learning efforts.
Programming as a (Socio)Logical Process
When describing the programming process using a formal system, the author turns to sociotechnical graphs (STGs), which assign a letter to a specific task in a process and track how the tasks enter, move within, and potentially exit a technical process. The author notes that STGs have fallen by the wayside for this type of analysis, and I can see why. While it might be relatively easy for an analyst deeply embedded in a process to keep track of which letter corresponds to which task, doing so will strain a reader’s working memory and make interpreting the STG difficult. I’m not a sociologist and don’t have a recommendation for an alternative system, but I found the STGs hard to read.
What I did enjoy were Jaton’s interactions with other members of the lab’s team while he developed and corrected an algorithm to generate rectangles that contained faces identified by workers in the Amazon Mechanical Turk program. The common myth of the lonely programmer fueled by caffeine and spite is, thankfully, mostly fiction. Effective programmers seek out advice and assistance, which the author’s colleagues were happy to provide. The lab director took on an outsider with limited coding skills, but Jaton’s willingness and apparent ability to make beneficial technical contributions surely led to friendly and productive interactions.
Conclusion
The Constitution of Algorithms is adapted from Jaton’s doctoral dissertation, which he admits in the foreword was “cumbersome.” There are a few uncommon phrasings and word substitution errors that made it by the editors, but overall Jaton and his MIT Press colleagues did an excellent job of transforming a specialized academic text into a book intended for a broader audience. I believe The Constitution of Algorithms will be useful for sociologists in general, ethnographers in particular, and other analysts who could benefit from a formal approach to the analysis of software development.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including more than 20 books for Microsoft Press and O’Reilly Media. He has also created more than 80 online training courses for LinkedIn Learning. He received his undergraduate degree in political science from Syracuse University and his MBA from the University of Illinois. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at http://www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Review: Experiencing the Impossible
Kuhn, Gustav. Experiencing the Impossible. MIT Press. 2019. 296 pp. ISBN: 978-0-262-03946-8
Author note: I had a presentation proposal accepted for the 2019 Science of Magic Association conference, for which author Gustav Kuhn serves as a committee member. The committee made its decision before I wrote this review.
Experiencing the Impossible: The Science of Magic, by Gustav Kuhn, explores the burgeoning field of scientific analysis of magic and its performance. Kuhn is a Reader (a rank above Senior Lecturer but below Professor) in Psychology at Goldsmiths, University of London and a member of The Magic Circle.
The science of magic is a relatively new field, but it’s one that lends itself to several different types of research. One way to examine how individuals react to (and, more importantly, interact with) magic is to ask their opinions about what they just saw. In one study, participants were shown a video of a magician making a helicopter disappear and were then asked whether they wanted to see a video showing another trick or one explaining how the trick was done.
You might be surprised to know that only 40% of the participants said they wanted to know how the trick was done. I personally take that result as a good sign…it means that if a typical person watches a routine on video with no connection to the performer, they will want an explanation less than half the time. If a performer can create an emotional bond with their audience, I believe that percentage will move even more in the performer’s favor.
Kuhn also points to arguments challenging whether audiences believe what they’re seeing is real. In his discussion, he quotes Bucknell University instructor Jason Leddington as arguing that “the audience should actively disbelieve that what they are apparently witnessing is possible.” A magical experience, then, only occurs when it appears that a law of nature is being violated. Similarly, Darwin Ortiz notes in Strong Magic that there is a struggle between our “intellectual belief” and “emotional belief”. We know that what we’re seeing isn’t real, but we want it to be so.
Throughout the rest of Experiencing the Impossible, Kuhn relates other aspects of the scientific examination of stage magic, with chapters discussing the role of processes including memory, visual perception, and the use of heuristics to reason about what you’re seeing. The latter topic draws on Daniel Kahneman’s description of System 1 and System 2 thinking from his book Thinking, Fast and Slow. System 1 is fast, relies on shortcuts, and is easier to fool; System 2 is the slower, logical, and more careful system in which one considers available evidence and comes to a reasoned conclusion. The reason many of us lean on System 1 more than we should is that it is less effortful than thinking in depth.
Experiencing the Impossible is an excellent book that captures the state of research in a field of personal interest to me as both a performer and a fan of science. Kuhn’s choice of topics provides an outstanding basis for an initial foray into the science of magic and offers a solid platform for future research. Highly recommended.
Review of Quantified: Biosensing Technologies in Everyday Life
Title: Quantified
Editor: Dawn Nafus
Publisher: MIT Press
Copyright: 2016
ISBN13: 978-0-262-52875-7
Length: 280
Price: $27.00
Rating: 92%
I received a promotional copy of this book from the publisher.
Fitness trackers, such as the Nike+ FuelBand, FitBit, and (in some modes) the Apple Watch, have grown in popularity over the past several years. Knowledge of one’s activity levels and physical state, even if measured somewhat inaccurately by contemporary sensors, empowers users by providing insights into their relative health and activity levels. Other sensors, including implanted devices such as pacemakers, record data more accurately at the cost of greater intrusion upon the self. In Quantified: Biosensing Technologies in Everyday Life, Dawn Nafus, a Senior Research Scientist at Intel Labs, leads an investigation into the anthropological implications of new technologies and applications.
Organization and Coverage
Quantified is a collection of papers from the Biosensors in Everyday Life project, a multi-year effort with representatives from several institutions that examined how biosensing technologies, using either “wet” sensors (e.g., saliva, blood, or another bodily fluid) or “dry” sensors (e.g., heart rate, temperature, or blood pressure), impact individuals and society as a whole. Nafus divided Quantified into three sections: Biosensing and Representation, Institutional Arrangements, and Seeing Like a Builder. The first section, Biosensing and Representation, contains four chapters that examine the Quantified Self (QS) movement from an academic perspective. The first three pieces are, as Nafus admits, written by academics using academic language. I was happy to discover those pieces are accessible to the general reader, which isn’t always the case with articles or dissertations written by specialists for specialists. For non-academics like myself, the first three chapters provide a useful glimpse at how professional scholars approach biosensing as both practice and artifact. The fourth piece, by Wired contributing editor and QS movement leader Gary Wolf, provides a bit of push-back against the strictly academic approach to biosensing.
The Institutional Arrangements section examines QS in terms of regulation, privacy, and autonomy. Images of Jeremy Bentham’s panopticon and assumed observation as presented in Foucault’s Discipline and Punish or Orwell’s 1984 immediately come to mind, but as with every new technology, access to information is regulated by differing privacy regimes at the regional, national, and supranational level.
The final section, Seeing Like a Builder, approaches biosensing from the perspective of mechanical engineering, device design, and data management. The first chapter is an edited conversation between Nafus, Deborah Estrin of Cornell Tech in New York City, and Anna de Paula Hanika of Open mHealth about the role of open data in the biosensing movement. Subsequent chapters investigate environmental monitoring, data available through the City of London’s bike rental program, and personal genomics.
Topics of Interest
I’ve written a fair amount about privacy issues and public policy, so I naturally gravitated toward the essays in the Institutional Arrangements section. In the Biosensing in Context chapter, Nissenbaum and Patterson apply the framework of Contextual Integrity to data captured by biosensors. As the name implies, Contextual Integrity addresses the appropriate sharing of information given its context, rather than a coarser set of norms established by law or policy. Individuals taking advantage of QS technologies might want to share information with other members of the movement to gain insights from their combined knowledge (called the “n of a billion 1’s” approach elsewhere in the collection). Determining appropriate sharing and usage depends on accurate metadata, which is discussed in Estrin and de Paula Hanika’s exploration of the Open mHealth data framework from the Seeing Like a Builder section.
In Disruption and the Political Economy of Biosensor Data, Fiore-Gartland and Neff address the narrative that new technologies favor democracy and democratization. Specifically, they challenge the notion that disruptive change is, by definition, good. As they note:
In their most extreme form, disruption discourses use the concepts of democracy and democratization as ways to describe technological change, and in doing so ascribe social power to technological change in a teleological, deterministic way: if we say a technology disrupts power by bringing democratic access to data or power, then the technology will be democratic.
As rhetorical constructs, “disruption” and “democratization” invoke ideas of personal freedom and autonomy, implicitly denying traditional authorities control over one’s data. As with most business models based on platforms that provide the medium through which data is shared (e.g., Facebook), this argument is inherently self-serving. In the United States, private companies face few barriers to collecting and analyzing individual data, and practically none at all if the data has been shared openly and intentionally. While the interaction of health privacy laws and QS data sharing has yet to be tested, existing precedent argues strongly in favor of an interpretation favorable to companies that want to analyze the data for private gain.
I also enjoyed Marc Böhlen’s chapter Field Notes in Contamination Studies, which chronicled his team’s effort to track water quality in Indonesia. Böhlen’s team had to wrestle with the cultural implications of their work and account for both the expectations of the Indonesian citizens affected by their monitoring as well as the initial suspicions of the Indonesian government. I hadn’t encountered a narrative of this type before, so I appreciated learning more about his team’s work.
Conclusion
Quantified is an excellent first multidisciplinary study of the Quantified Self movement. The field is certain to evolve quickly, but the pieces in this book provide a strong base on which to perform future analysis.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created more than 40 online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Book Review: Virtual Economies from MIT Press
Title: Virtual Economies
Authors: Vili Lehdonvirta and Edward Castronova
Publisher: MIT Press
Copyright: 2014
ISBN13: 978-0-262-02725-0
Length: 294
Price: $45.00
Rating: 94%
I received a promotional copy of this book from the publisher.
Designing playable, let alone interesting, video games is difficult. Massive multiplayer games, especially those that allow trade among players, increase design complexity considerably. It’s easy to get lost in the weeds, tweaking prices of individual items or resources to make them more or less accessible to the players and finding the best ways to move money into or out of the game’s economy.
In the face of that complexity, designers must remember their primary goal: earning money for the publisher. Early in Virtual Economies, Lehdonvirta and Castronova lay out the three main objectives of virtual economy design: creating content (both by the producers and the players), attracting and retaining users (attention), and monetizing the game’s virtual resources to create an income stream for the producers. These objectives frame their analysis throughout the book, providing a coherent narrative that emphasizes the importance of designing a system so it generates revenues needed to sustain a game or community.
Unintended Consequences
One source of joy and fear for designers is discovering how their users will creatively exploit the rules of a game to create the experience they want. In fact, the authors point out that designing an inefficient currency might make a game more playable, perhaps because players would develop strategies and tactics to work around the inefficiencies or negotiation and trust issues would lead to interesting player interactions.
You can also try to make virtual money through traditional economic activity. In games, as in any economy, some players search for arbitrage opportunities. When discrepancies arise between the objective value of an item and its perceived value, investors can attempt to make a profit by buying or selling the item. In the stock market, these inefficiencies might arise when a company’s stock is undervalued because investors give too much weight to recent sales data. Investors can buy the stock, hold it until it reaches its proper value, and sell to collect the profits.
Some games offer more straightforward examples, such as allowing users to buy a leather jerkin at a shop in one part of the virtual world and sell it in another region for a significant profit. In either case, players who enjoy this type of activity can take advantage of in-game commercial opportunities.
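The arbitrage logic the authors describe can be reduced to a one-line calculation. This is a hypothetical sketch of the leather-jerkin example above; the function name and the numbers are illustrative, not from the book:

```python
# A toy sketch of in-game arbitrage: profit is the price spread between
# two markets minus any carrying cost (travel fees, transaction taxes).
# All names and numbers are illustrative assumptions.

def arbitrage_profit(buy_price, sell_price, carry_cost=0.0):
    """Profit per unit from buying in one market and selling in another."""
    return sell_price - buy_price - carry_cost

# Buy a jerkin for 10 gold in one region, sell it for 35 in another,
# paying 5 gold for transport along the way.
profit = arbitrage_profit(buy_price=10, sell_price=35, carry_cost=5)
```

The trade is worthwhile only when the spread exceeds the cost of moving the goods, which is why designers can dampen arbitrage simply by raising transport or transaction costs.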
Faucets and Sinks
Just as players try to acquire game resources, designers must find ways to remove those resources from the game. Maintaining the proper flow of money using macroeconomic policies requires a tricky balancing act between having too much or not enough money in the system. Without income, players can’t buy items they need or desire, but too much money produces in-game inflation that puts even routine purchases out of reach of newer players.
Lehdonvirta and Castronova describe how designers can use money faucets and money sinks to add or remove virtual currency from the game. Money faucets might be as simple as gaining treasure from killing orcs or as complex as arbitrage, while money sinks could include maintenance costs for dwellings, replacing damaged equipment, or securing transport to remote areas.
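The faucet-and-sink balancing act lends itself to a quick simulation. This is my own toy model, not one from the book: a fixed faucet (treasure drops) paired with a proportional sink (upkeep costs that scale with wealth) keeps the money supply converging toward an equilibrium rather than inflating without bound.

```python
# A toy illustration (not from Virtual Economies) of balancing money
# faucets and sinks: simulate total currency in circulation as players
# earn treasure and pay maintenance costs each tick.

def simulate_money_supply(ticks, faucet_per_tick, sink_rate, start=0.0):
    """Faucet adds a fixed amount; sink drains a fraction of the supply."""
    supply = start
    history = []
    for _ in range(ticks):
        supply += faucet_per_tick      # e.g., loot from defeated orcs
        supply -= supply * sink_rate   # e.g., dwelling upkeep, repairs
        history.append(supply)
    return history
```

With a proportional sink, the supply approaches an equilibrium of faucet_per_tick × (1 − sink_rate) / sink_rate; a flat sink that players can avoid, by contrast, lets the supply grow forever, which is one way in-game inflation takes hold.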
Virtual Becomes Real
Finally, it’s entirely possible for in-game items and virtual currency to cross over into the real world. Some rare World of Warcraft items command hundreds of dollars on eBay or elsewhere, and entire companies in Romania and China make money through “gold farming” (defeating monsters to gain their treasure and selling the gold to other players) or leveling up characters for players who lack either the time or inclination to do it themselves.
Virtual currency can also be used in place of real money for physical transactions, as happened with the Q coin used in Chinese producer Tencent’s game Tencent QQ. A lack of credit cards or easy online payment hampered online commerce in China at the time, so players used Q coins as a medium of exchange. Players transferred Q coins to settle debts or, after the company (at the insistence of the People’s Bank of China) limited the amount that could be transferred at one time, created accounts with standard amounts of Q coins and gave their transaction partners the account’s password.
Conclusions
Virtual Economies combines standard material found in earlier works such as The Economics of Electronic Commerce with new applications told through the eyes of individuals who are both academic analysts and practitioners. Specifically, Lehdonvirta and Castronova provide a substantial overview of traditional economics, such as supply and demand curves and marginal analysis, as well as more recent topics from behavioral economics that help explain why and how individuals deviate from the traditional rational actor model. Adding in discussions of what makes for a good currency, how markets function, and macroeconomic issues removes the need for students to buy multiple texts to get the full picture.
Many professors and independent readers will choose to supplement this book’s information with reading packets and online resources, but Virtual Economies could easily stand alone in any context. Highly recommended.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created more than 20 online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at www.curtisfrye.com and follow him as @curtisfrye on Twitter.
Review of Memes in Digital Culture, by Limor Shifman (MIT Press)
In addition to my other ventures, I’m the editor and lead reviewer of Technology and Society Book Reviews. Some of the books I cover related directly to my work as an improviser and speaker — Memes in Digital Culture is just such a work. This review originally appeared on January 5, 2014.
Title: Memes in Digital Culture
Author: Limor Shifman
Publisher: MIT Press
Copyright: 2014
ISBN13: 978-0-262-52543-5
Length: 200
Price: $13.95
Rating: 94%
I purchased a copy of this book for personal use.
I’m a fan of the Essential Knowledge Series from MIT Press. These small-format books provide useful information on a variety of digital culture topics, including Information and the Modern Corporation and Intellectual Property Strategy, which I have also reviewed. In Memes in Digital Culture, author Limor Shifman of The Hebrew University of Jerusalem develops a framework for analyzing memes in the networked age.
Memes as Entities
An early meme familiar to Americans and other western audiences is Kilroy Was Here, attributed to James J. Kilroy, a Massachusetts shipyard inspector. More recent examples are the Pepper Spraying Cop, Scumbag Steve, and Socially Awkward Penguin. Richard Dawkins introduced memes in his 1976 book The Selfish Gene. In that work, Dawkins argued that memes are small units of culture transmitted amongst a population, like genes. Dawkins’s framework posits that memes have three main characteristics: longevity, fecundity, and copy fidelity. Shifman points out that the Internet enhances all three of those aspects, allowing exact digital copies of memes to spread quickly and to stay around longer in the Facebook timelines, Twitter feeds, and hard drives of users.
Dawkins created his original theory of memes before networking technologies became commonplace. Shifman extends his work by defining an Internet meme as:
(a) a group of digital items sharing common characteristics of content, form, and/or stance, which (b) were created with awareness of each other, and (c) were circulated, imitated, and/or transformed via the Internet by many users.
Many commentators use the terms meme and viral interchangeably, but Shifman argues they’re two very different things. Her definition of memes emphasizes the transformational aspect of creation and sharing, such as adding a new caption to a common image. Korean rapper PSY’s viral video “Gangnam Style”, which was viewed more than one billion times on YouTube, was immensely popular but not, at least at first, a meme. Later take-offs on the theme, such as an homage to the wealthy former U.S. presidential candidate called “Romney Style”, marked the transition from viral video to meme.
Analyzing Memes
Shifman argues that memes can be analyzed in three ways: content, form, and stance; and interpreted through economic, social, and cultural/aesthetic lenses. Content, form, and stance capture the creative elements of a meme, including the contributor’s attitude toward the subject matter and subtext inherent in the creation. For example, the “Leave Britney Alone” video creator puts forward, through subtext, the idea that it’s OK to be a gay male wearing a wig and eyeliner. It’s also possible to break memes down into genres, including Reaction Photoshops (Photoshop-edited images are sometimes called “shoops”), Photo Fads, Flash Mobs, Recut Trailers, Misheard Lyrics, Bad Dubbing, and LOL Cats.
Chapter 8 focuses on memes in the political realm, particularly involving citizen participation. In the U.S., that trend included the Occupy Wall Street movement and the “I am the 99%” meme. Shifman and her colleagues also investigated memes in France, China, Israel, and Egypt. Because many regimes monitor or censor the internet in general and social media in particular, activists use code words to obscure their discussions and intentions. Memes also bring the difference between what Erving Goffman called the frontstage and backstage political venues into sharper relief. The frontstage represents the public face of a politician or campaign, while the backstage represents the venues where the real work gets done.
Final Thoughts
Like the other Essential Knowledge Series books I’ve had the pleasure to review, Dr. Shifman’s Memes in Digital Culture provides a solid overview on an interesting topic. I can easily see academics adopting her text as required reading for digital media analysis courses and executives reading it to gain insights into meme culture.
Curtis Frye is the editor of Technology and Society Book Reviews. He is the author of more than 30 books, including Improspectives, his look at applying the principles of improv comedy to business and life. His list includes more than 20 books for Microsoft Press and O’Reilly Media; he has also created over a dozen online training courses for lynda.com. In addition to his writing, Curt is a keynote speaker and entertainer. You can find more information about him at www.curtisfrye.com.