The Hacker Way

On December 21, 2012, Blake Ross—the boy genius behind Firefox and currently Facebook’s Director of Product—posted this to his Facebook page:

Some friends and I built this new iPhone app over the last 12 days. Check it out and let us know what you think!

The new iPhone app was Facebook Poke. One of the friends was Mark Zuckerberg, Facebook’s founder and CEO. The story behind the app’s speedy development and Zuckerberg’s personal involvement holds lessons for the practice of digital humanities in colleges and universities.

Late last year, Facebook apparently entered negotiations with the developers of Snapchat, an app that lets users share pictures and messages that “self-destruct” shortly after opening. Feeding on user worries about Facebook’s privacy policies and its use and retention of personal data, Snapchat had taken off among young people in little more than a few weeks. By offering something Facebook didn’t—confidence that your sexts wouldn’t resurface in your job search—Snapchat exploded.

It is often said that Facebook doesn’t understand privacy. I disagree. Facebook understands privacy all too well, and it is willing to manipulate its users’ privacy tolerances for maximum gain. Facebook knows that every privacy setting is its own niche market, and if its privacy settings are complicated, it’s because the tolerances of its users are so varied. Facebook recognized that Snapchat had filled an unmet need in the privacy marketplace, and tried first to buy it. When that failed, it moved to fill the niche itself.

Crucially for our story, Facebook’s negotiations with Snapchat seem to have broken down just weeks before a scheduled holiday moratorium for submissions to Apple’s iTunes App Store. If Facebook wanted to compete over the holiday break (prime time for hooking up, on social media and otherwise) in the niche opened up by Snapchat, it had to move quickly. If Facebook couldn’t buy Snapchat, it had to build it. Less than two weeks later, Facebook Poke hit the iTunes App Store.

Facebook Poke quickly rose to the top of the app rankings but has since fallen off dramatically in popularity. Snapchat, meanwhile, remains among iTunes’ top 25 free apps, continues adding users, and has recently closed a substantial round of venture capital funding. To me, Snapchat’s success in the face of such firepower suggests that Facebook’s users are becoming savvier players in the privacy marketplace. Surely there are lessons in this for those of us involved in digital asset management.

Yet there is another lesson digital humanists and digital librarians should draw from the Poke story. It is a lesson that depends very little on the ultimate outcome of the Poke/Snapchat horse race. It is a lesson about digital labor.

Mark Zuckerberg is CEO of one of the largest and most successful companies in the world. It would not be illegitimate if he decided to spend his time delivering keynote speeches to shareholders and entertaining politicians in Davos. Instead, Zuckerberg spent the weeks between Thanksgiving and Christmas writing code. Zuckerberg identified the Poke app as a strategic necessity for the service he created, and he was not too proud to roll up his sleeves and help build it. Zuckerberg explained the management philosophy behind his “do it yourself” impulse in the letter he wrote to shareholders prior to Facebook’s IPO. In a section of the letter entitled “The Hacker Way,” Zuckerberg wrote:

The Hacker Way is an approach to building that involves continuous improvement and iteration. Hackers believe that something can always be better, and that nothing is ever complete. They just have to go fix it – often in the face of people who say it’s impossible or are content with the status quo….

Hacking is also an inherently hands-on and active discipline. Instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works. There’s a hacker mantra that you’ll hear a lot around Facebook offices: “Code wins arguments.”

Hacker culture is also extremely open and meritocratic. Hackers believe that the best idea and implementation should always win – not the person who is best at lobbying for an idea or the person who manages the most people….

To make sure all our engineers share this approach, we require all new engineers – even managers whose primary job will not be to write code – to go through a program called Bootcamp where they learn our codebase, our tools and our approach. There are a lot of folks in the industry who manage engineers and don’t want to code themselves, but the type of hands-on people we’re looking for are willing and able to go through Bootcamp.

Now, listeners to Digital Campus will know that I am no fan of Facebook, which I abandoned years ago, and I’m not so naive as to swallow corporate boilerplate hook, line, and sinker. Nevertheless, it seems to me that in this case Zuckerberg was speaking from the heart and not the wallet. As Business Insider’s Henry Blodget pointed out in the days of Facebook’s share price freefall immediately following its IPO, investors should have read Zuckerberg’s letter as a warning: he really believes this stuff. In the end, however, whether it’s heartfelt or not, or whether it actually reflects the reality of how Facebook operates, I share my colleague Audrey Watters’ sentiment that “as someone who thinks a lot about the necessity for more fearlessness, openness, speed, flexibility and real social value in education (technology) — and wow, I can’t believe I’m typing this — I find this part of Zuckerberg’s letter quite a compelling vision for shaking up a number of institutions (and not just “old media” or Wall Street).”

There is a widely held belief in the academy that the labor of those who think and talk is more valuable than the labor of those who build and do. Professorial contributions to knowledge are considered original research, while librarians’ and educational technologists’ contributions to these endeavors are called service. These are not merely imagined prejudices. They are manifest in human resource classifications and in the terms of contracts that provide tenure to one group and, often, at-will employment to the other.

Digital humanities is increasingly in the public eye. The New York Times, the Los Angeles Times, and the Economist all have published feature articles on the subject recently. Some of this coverage has been positive, some of it modestly skeptical, but almost all of it has focused on the kinds of research questions digital humanities can (or maybe cannot) answer. How digital media and methods have changed humanities knowledge is an important question. But practicing digital humanists understand that an equally important aspect of the digital shift is the extent to which digital media and methods have changed humanities work and the traditional labor and power structures of the university. Perhaps most important has been the calling into question of the traditional hierarchy of academic labor which placed librarians “in service” to scholars. Time and again, digital humanities projects have succeeded by flattening distinctions and divisions between faculty, librarians, technicians, managers, and students. Time and again, they have failed by maintaining these divisions, by honoring traditional academic labor hierarchies rather than practicing something like the hacker way.

Blowing up the inherited management structures of the university isn’t an easy business. Even projects that understand and appreciate the tensions between these structures and the hacker way find it difficult to accommodate them. A good example of an attempt at such an accommodation has been the “community source” model of software development advanced by some in the academic technology field. Community source’s successes and failures, and the reasons for them, illustrate just how important it is to make room for the hacker way in digital humanities and academic technology projects.

As Brad Wheeler wrote in EDUCAUSE Review in 2007, a community source project is distinguished from more generic open source models by the fact that “many of the investments of developers’ time, design, and project governance come from institutional contributions by colleges, universities, and some commercial firms rather than from individuals.” Funders of open source software in the academic and cultural heritage fields have often preferred the community source model, assuming that, because of high-level institutional commitments, the projects it generates will be more sustainable than projects that rely mainly on volunteer developers. In these community source projects, foundations and government funding agencies put up major start-up funding on the condition that recipients commit regular staff time—“FTEs”—to work on the project alongside grant-funded staff.

The community source model has proven effective in many cases. Among its success stories are Sakai, an open source learning management system, and Kuali, an open source platform for university administration. Just as often, however, community source projects have failed. As I argued in a grant proposal to the Library of Congress for CHNM’s Omeka + Neatline collaboration with UVa’s Scholars’ Lab, community source projects have usually failed in one of two ways: either they become mired in meetings and disagreements between partner institutions and never really get off the ground in the first place, or they stall after the original source of foundation or government funding runs out. In both cases, community source failures lie in the failure to win the “hearts and minds” of the developers working on the project, in the failure to flatten traditional hierarchies of academic labor, in the failure to do it “the hacker way.”

In the first case—projects that never really get off the ground—developers aren’t engaged early enough in the process. Because they rely on administrative commitments of human resources, conversations about community source projects must begin with administrators rather than developers. These collaborations are born out of meetings between administrators located at institutions that are often geographically distant and culturally very different. The conversations that result frequently end in disagreement. But even where consensus is reached, it can be a fragile basis for collaboration. We often tend to think of collaboration as shared decision making. But as I have said in this space before, shared work and shared accomplishment are more important. As Zuckerberg has it, hacking is “inherently hands-on and active”: “instead of debating for days whether a new idea is possible or what the best way to build something is, hackers would rather just prototype something and see what works,” and “the best idea and implementation should always win—not the person who is best at lobbying for an idea or the person who manages the most people.” That is, the most successful digital work occurs at the level of work, not at the level of discussion, and for this reason hierarchies must be flattened. Everyone has to participate in the building.

In the second case—projects that stall after funding runs out—decisions are made for developers (about platforms, programming languages, communication channels, deadlines) early in the planning process that may deeply affect their work at the level of code, sometimes several months down the road. These decisions can stifle developer creativity or make developers’ work unnecessarily difficult, both of which can lead to developer disinterest. Yet experience both inside and outside the academy shows that what sustains an open source project after funding runs out is the personal interest and commitment of its developers. In the absence of additional funding, the only people who will fix bugs and answer forum posts are committed developers. Developer interest is often a project’s best sustainability strategy. As Zuckerberg says, “hackers believe that something can always be better, and that nothing is ever complete.” But they have to want to make it better.

When decisions are made for developers (and other “doers” on digital humanities and academic technology projects such as librarians, educational technologists, outreach coordinators, and project managers), they don’t. When they are put in a position of “service,” they don’t. When traditional hierarchies of academic labor are grafted onto digital humanities and academic technology projects that owe their success as much to the culture of the digital age as they do to the culture of the humanities, they don’t.

Facebook understands that the hacker way works best in the digital age. Successful digital humanists and academic technologists do too.

[This post is based on notes for a talk I was scheduled to deliver at a NERCOMP event in Amherst, Massachusetts on Monday, February 11, 2013. The title of that talk was intended to be “‘Not My Job’: Digital Humanities and the Unhelpful Hierarchies of Academic Labor.” Unfortunately, the great Blizzard of 2013 kept me away. Thankfully, I have this blog, so all is not lost.]

[Image credit: Thomas Hawk]

Take me to your leader: The importance of knowing who's in charge

You’ve probably been there. A new job, a new project team, a new client. A great first meeting. Everyone is invited to talk, to listen, to contribute. Everyone is assured that their voices will be heard, their concerns addressed, their ideas taken seriously.

Fast forward a week, a month, a year. One by one, those voices have been silenced, those concerns dismissed, those ideas undermined. What remains are the ideas and concerns of the person who (it has now become clear) is in charge.

To do their jobs effectively, members of a project team need to know who the decision maker is. We all like democracy, those of us in education and cultural heritage especially so. If it’s truly a democracy, great. But if it’s a dictatorship, people would rather know from the outset than be led down a rhetorical primrose path of “democracy,” “consensus,” and “collaboration” only to have the rug pulled out from under them when the decision maker finally decides to assert his or her will.

If you are the decision maker, let us know. Anything less treats team members like children and wastes everybody’s time. What’s worse, it makes for shortsighted, haphazard, second-rate work product.

Ancient Religion, Modern Technology: Takeaways

[Last month, I posted notes from my keynote at Brown University’s Ancient Religion, Modern Technology workshop. I was also fortunate to be invited to offer some concluding observations to the excellent group assembled there. Here they are, my very rough notes.]

1) The community of scholars interested in ancient religion has done some extraordinary digital work with extraordinarily little money. The work of Michael Penn, David Michelson, and other members of the “Syriac Mafia” (as the small group of scholars working on ancient Syriac manuscripts was dubbed during the workshop) through projects such as the Syriac Reference Portal is a particularly striking example of this.

2) I was struck by the extent to which this community has identified real problems/questions and how incredibly successful it has been in answering them. It seems to me that the integrity and availability of texts are much bigger problems in classics than in more modern history. But they are also problems that lend themselves to digital solutions: edge matching, name authority, calendar disambiguation, and handwriting recognition are clearly defined problems with clearly identifiable (if difficult) solutions. These are what we might call “instrumental” uses of technology, and they seem less common among the broader community of digital humanists. I think that broader community could benefit from a renewed focus on these kinds of problems in our own domains, but I think this community could stand a broader discussion about the possibly “squishier” things that occupy much of the rest of digital humanities: new modes of scholarly communication, digital pedagogy, open access, public humanities, and social media.

3) Digital humanities has too often gone searching for the elusive “one” database/tool/standard. To a large extent, this search has been driven by our funders, who are reluctant to fund multiple projects with the same ends, but we have been complicit as well. I caution this group to avoid this trap. The notion of the “one” doesn’t represent the way either technology or the humanities proceed. Technologies and ideas exist in dialog with one another. Put crudely, they compete in the marketplace. There isn’t one smartphone, and there isn’t one book about Alexander the Great, and we wouldn’t want there to be. In the same way, I don’t think there has to be or even should be one database/tool/standard for the things we want to accomplish. I don’t think we or our funders should be so concerned about duplication of effort or projects that “do the same thing.” We should strive for interoperability in our products, but we shouldn’t wish for just one. I understand where the impulse comes from. It is messy and unsatisfying to have overlapping and competing databases, standards, and tools. But what are the humanities if not messy and, ultimately, unsatisfying? These are the things that spur us on to new research.

Nobody cares about the library: How digital technology makes the library invisible (and visible) to scholars

There is a scene from the first season of the television spy drama Chuck that takes place in a library. In the scene, our hero and unlikely spy, Chuck, has returned to his alma mater, Stanford, to find a book his former roommate, Bryce, has hidden in the stacks as a clue. All Chuck has to go on is a call number scribbled on a scrap of paper.

When he arrives in the stacks, he finds the book is missing and assumes the bad guys have beaten him to it. Suddenly, however, Chuck remembers back to his undergraduate days of playing tag in the stacks with Bryce with plastic dart guns. Bryce had lost his weapon, and Chuck had cornered him. Just then, Bryce reached beneath a shelf where he had hidden an extra gun and finished Chuck off. Remembering this scene, Chuck reaches beneath the shelf where the book should have been shelved and finds that this time around Bryce has stashed a computer disk.

I like this clip because it illustrates how I think most people—scholars, students, geeks like Chuck—use the library. I don’t mean as the setting for covert intelligence operations or even undergraduate dart gun games. Rather, I think it shows that patrons take what the library offers and then use those offerings in ways librarians never intended. Chuck and his team (and the bad guys) enter the library thinking they are looking for a book with a given call number only to realize that Bryce has repurposed the Library of Congress Classification system to hide his disk. It reinforces the point when, at the end of the scene, the writers play a joke at the expense of a hapless librarian, who, while the action is unfolding, is trying to nail Chuck for some unpaid late fees. When the librarian catches up with Chuck, and Chuck’s partner Sarah shouts “Run!” she is not, as the librarian thinks, worried about late fees but about the bad guys with guns standing behind him. Chuck and his friends don’t care about the library. They use the library’s resources and tools in their own ways, to their own ends, and the concerns of the librarians are a distant second to the concerns that really motivate them.

In some ways, this disconnect between librarians (and their needs, ways of working, and ways of thinking) and patrons (and their needs and ways of working) is only exacerbated by digital technology. In the age of Google Books, JSTOR, Wikipedia, and ever expanding digital archives, librarians may rightly worry about becoming invisible to scholars, students, and other patrons—that “nobody cares about the library.” Indeed, many faculty and students may wonder just what goes on in that big building across the quad. Digital technology has reconfigured the relationship between librarians and researchers. In many cases, this relationship has grown more distant, causing considerable consternation about the future of libraries. Yet, while it is certainly true that digital technology has made libraries and librarians invisible to scholars in some ways, it is also true that, in some areas, digital technology has made librarians increasingly visible, increasingly important.

To try to understand the new invisibility/visibility of the library in the digital age let’s consider a few examples on both sides.

The invisible library

Does it matter that Chuck couldn’t care less about call numbers and late fees or about controlled vocabularies, metadata schemas, circulation policies, or theories of collections stewardship? I’m here to argue that it doesn’t. Don’t get me wrong. I’m not arguing that these things don’t matter or that the library should be anything but central to the university experience. But to play that central role, the library doesn’t have to be uppermost in everyone’s mind. In the digital age, in most cases, the library is doing its job best when it is invisible to its patrons.

What do I mean by that? Let me offer three instances where the library should strive for invisibility, three examples of “good” invisibility:

Search: We tend to evaluate the success of our web pages with metrics like numbers of page views, time spent per page, and bounce rate. But with search the metrics are reversed: We don’t want people looking at lots of pages or spending a lot of time on our websites. We want the library web infrastructure to be essentially invisible, or at least to be visible for only a very short period of time. What we really want with search is to allow patrons to get in and get out as quickly as possible with just what they were looking for.

APIs and 3rd party mashups: In fact, we may not want people visiting library websites at all. What would be even better would be to provide direct computational access to collections databases so people could take the data directly and use it in their own applications elsewhere. Providing rich APIs (Application Programming Interfaces) would make the library even more invisible. People wouldn’t even come to our websites to access content, but they would get from us what they need where they need it.

Social media: Another way in which we may want to discourage people from coming to library websites is by actively placing content on other websites. To the extent that a small or medium-sized library wants to reach general audiences, it has a better chance of doing so in places where that audience already is. Flickr Commons is one good example of this third brand of invisibility. Commenters on Flickr Commons may never travel back to the originating library’s website, but they may have had a richer interaction with that library’s content because of it.
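To make the API point above concrete, here is a minimal sketch of the kind of reuse machine-readable access enables. Everything in it is invented for illustration—the response body, field names, and records do not come from any real library’s API.

```python
import json

# A JSON payload such as a hypothetical collections API might return.
response_body = """
{
  "items": [
    {"id": "obj-001", "title": "Daguerreotype portrait", "year": 1852},
    {"id": "obj-002", "title": "Civil War letter", "year": 1863},
    {"id": "obj-003", "title": "WPA mural study", "year": 1936}
  ]
}
"""

# A third-party developer parses the payload and reuses the records in
# their own application, never visiting the library's own web pages.
records = json.loads(response_body)["items"]
nineteenth_century = [r["title"] for r in records if r["year"] < 1900]
print(nineteenth_century)  # just the pre-1900 items
```

The point of the sketch is the direction of travel: the library supplies structured data, and the presentation happens elsewhere, on the patron’s terms.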

The visible library

The experience of the digital humanities shows that the digital can also bring scholars into ever closer and more substantive collaboration with librarians. It is no accident that many if not most successful digital humanities centers are based in university libraries. Much of digital humanities is database driven, but an empty database is a useless database. Librarians have the stuff to fill digital humanists’ databases and the expertise to do so intelligently.

Those library-based digital humanities centers tend to skew towards larger universities. How can librarians at medium-sized or even small university libraries help the digital humanities? Our friend Wally Grotophorst, Associate University Librarian for Digital Programs and Systems at Mason, provides some answers in his brief but idea-rich post, What Happens To The Mid-Major Library? I’ll point to just three of Wally’s suggestions:

Focus on special collections, that is anything people can’t get from shared sources like Google Books, JSTOR, LexisNexis, HathiTrust. Not only do special collections differentiate you from other institutions online, they provide unique opportunities for researchers on campus.

Start supporting data-driven research in addition to the bibliographic-driven kind that has been the traditional bread and butter of libraries. Here I’d suggest tools and training for database creation, social network analysis, and simple text mining.

Start supporting new modes of scholarly communication—financially, technically, and institutionally. Financial support for open access publishing of the sort prescribed by the Compact for Open-Access Publishing Equity is one ready model. Hosting, supporting, and publicizing scholarly and student blogs as an alternative or supplement to existing learning management systems (e.g. Blackboard) is another. University Library/University Press collaboration, like the University of Michigan’s MPublishing reorganization, is a third.
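The second of Wally’s suggestions—tools and training for simple text mining—can begin as modestly as a word-frequency count. Here is a minimal sketch of the sort of exercise a library workshop might teach, using an invented toy “corpus” rather than any real collection:

```python
import re
from collections import Counter

# An invented toy corpus standing in for a digitized text.
text = "The library is open. The stacks are open, and the archive is open."

# Lowercase, split into words, and tally -- the "hello world" of text mining.
words = re.findall(r"[a-z]+", text.lower())
counts = Counter(words)
print(counts.most_common(3))
```

A real workshop would move from here to tokenization, stop words, and corpus-scale comparison, but the shape of the work is the same: turn text into counts, then into questions.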


In an information landscape increasingly dominated by networked resources, both sides of the librarian-scholar/student relationship must come to terms with a new reality that is in some ways more distant and in others closer than ever before. Librarians must learn to accept invisibility where digital realities demand it. Scholars must come to understand the centrality of library expertise and accept librarians as equal partners as more and more scholarship becomes born digital and the digital humanities goes from being a fringe sub-discipline to a mainstream pursuit. Librarians in turn must expand those services like special collections, support for data-driven research, and access to new modes of publication that play to their strengths and will best serve scholars. We all have to find new ways, better ways to work together.

So, where does that leave Chuck? Despite not caring about our work, Chuck actually remembers the library fondly as a place of play. Now maybe we don’t want people playing dart guns in the stacks. But applied correctly, digital technology allows our users and our staff to play, to be creative, and in their own way to make the most of the library’s rich resources.

Maybe the Chucks of the world do care about the library after all.

[This post is based on a talk I delivered at American University Library’s Digital Futures Forum. Thanks to @bill_mayer for his kind invitation. In memory of my dear friend Bob Griffith, who did too much to come and hear this lousy talk.]

Game Change: Digital Technology and Performative Humanities

“Game changing” is a term we hear a lot in digital humanities. I have used it myself. But try, as I was asked to do for a recent talk at Brown University’s Ancient Religion, Modern Technology workshop, to name a list of truly game-changing developments wrought by digital humanities. I come up short.

Struggling with this problem, I found it useful in preparing my talk to examine the origins or at least the evolution of the term. I’m sure it’s not the earliest use, but the first reference I could find to “game changing” (as an adjective) in Google Books was from a 1953 Newsweek article, not surprisingly about baseball, specifically in reference to how Babe Ruth and his mastery of the home run changed the game of baseball. This is a telling, if serendipitous, example, because baseball fans will know that Babe Ruth really did change baseball, in that the game was played one way before he joined the Red Sox in 1914 and another way really ever since. Babe Ruth’s veritable invention of the home run changed baseball forever, from the “small ball” game of infield singles, sacrifice bunts, and strategic base running of the late-19th and early-20th centuries to the modern game dominated by power and strength. As Baseball Magazine put it none-too-flatteringly in 1921: “Babe has not only smashed all records, he has smashed the long-accepted system of things in the batting world and on the ruins of the system has erected another system or rather lack of system whose dominant quality is brute force.” From what I could gather from my quick survey of Google Books, for the better part of the next thirty years, the term is mainly used in just this way, in the context of sports, literally to talk about how games have been changed.

In the 1980s, however, the term seems to take on a new meaning, a new frequency, and a new currency. Interestingly, the term’s new relevance seems to be tied to a boom in business and self-help books. This probably comes as no surprise: I think most of us will associate the term today with the kind of management-speak taught in business schools and professional development workshops. In this context, it’s used metaphorically to recommend new strategies for success in sales, finance, or one’s own career. It’s still used in the context of sports, but most of what I found throughout the 80s and 90s relates to business and career. Going back to our graph, however, we see that it’s not until the turn of this century that the term gets its big boost. Here we see another shift in its usage, from referring to business in general to the technology business in particular. This also comes as no surprise, considering the digital communications revolution that took shape during the five years on either side of the new millennium. Here we see a new word appended to the phrase: game-changing technology. And even more specifically, the phrase seems to become bound up with a fourth word: innovation. Today use of the term has been extended even further, to all manner of cultural discourse from politics to university-press-published humanities texts.

But when we use the term in these other arenas—i.e. in ways other than in the literal sense of changing the way a sport or game is played—in order for it to be meaningful, in order for it to be more than jargon and hyperbole, in order for the “game-changing” developments we’re describing to live up to the description, it seems to me that they have to effect a transformation akin to the one Babe Ruth effected in baseball. After Ruth, baseball games were won and lost by new means, and the skills required to be successful at baseball were completely different. A skilled baserunner was useless if most runs were driven in off home runs. The change Ruth made wasn’t engendered by his being able to bunt or steal more effectively than, say, Ty Cobb (widely acknowledged as the best player of the “small ball” era); it was engendered by making bunting and stealing irrelevant, by doing something completely new.

In the same way, I don’t think technologies that simply help us do what we’ve always done, but better and more efficiently, should be counted as game-changing. Innovation isn’t enough. Something that helps us write a traditional journal article more expertly or answer an existing question more satisfactorily isn’t to me a game-changing development. When you use Zotero to organize your research, or even when you use sophisticated text mining techniques to answer a question that you could have answered (though possibly less compellingly) using other methods, or even when you use those techniques to answer questions that you couldn’t have answered but would like to have answered, that’s not to me game-changing. And when you write that research up and publish it in a print journal, or even online as an open access .pdf, or even as a rich multimedia visualization or Omeka exhibit, that to me looks like playing the existing game more expertly, not fundamentally changing the game itself.

These things may make excellent use of new technologies. But they do so to more or less the same ends: to critique or interpret a certain text or artifact or set of texts or artifacts. Indeed, it is this act of criticism and interpretation that is central to our current vision of humanistic pursuit. It is what we mean when we talk about the humanities. A journal article by other means isn’t a game changer. It is the very essence of the game we play.

If those things, so much of what we consider to be the work of digital humanities, don’t count as game changers, then what does count? In his new book, Reading Machines, Steve Ramsay argues that the promise of digital technologies for humanities scholarship is not so much to help us establish a new interpretation of a given text but to make and remake that text to produce meaning after meaning. Here Steve takes as a model the Oulipo, or “workshop of potential literature,” movement, which sought to use artificial constraints of time or meter or mathematics—such as replacing all the nouns in an existing text with other nouns according to a predefined constraint—to create “story-making machines.” He draws on Jerry McGann and Lisa Samuels’ notion of cultural criticism as “deformance,” a word that for Steve “usefully combines a number of terms, including ‘form,’ ‘deform,’ and ‘performance.'” For Ramsay digital humanists “neither worry that criticism is being naively mechanized, nor that algorithms are being pressed beyond their inability” but rather imagine “the artifacts of human culture as being radically transformed, reordered, disassembled, and reassembled” to produce new artifacts.

This rings true to me. Increasingly, our digital work is crossing the boundary that separates secondary source from primary source, that separates second-hand criticism from original creation. In this our work looks increasingly like art.

The notion of digital humanities as deformance or performance extends beyond what Steve calls “algorithmic criticism,” beyond the work of bringing computational processes to bear on humanities texts. Increasingly, digital humanities work is being conceived as much as an event as a product or project. With the rise of social media and its ethic of transparency, digital humanities is increasingly being done in public and experienced by its audiences in real time. Two recent CHNM projects, One Week | One Tool and Hacking the Academy, point in this direction.

An NEH-funded summer institute, One Week | One Tool set out to build a digital tool for humanities scholarship, from inception to launch, in one week. For one week in July 2010, CHNM brought together a group of twelve digital humanists of diverse disciplinary backgrounds and practical experience (Steve Ramsay among them) to build a new software application or service. The tool the group created, Anthologize, a WordPress plugin that allows bloggers to remix, rework, and publish their blog posts as an e-book, is currently in use by thousands of WordPress users.

At the outset, One Week | One Tool set out to prove three claims: 1) that learning by doing is an important and effective part of digital humanities training; 2) that the NEH summer institute can be adapted to accommodate practical digital humanities pedagogy; and 3) that digital humanities tools can be built more quickly and affordably than conventional wisdom would suggest. I think we succeeded in proving these claims. But as a project, I think One Week | One Tool showed something else, something unexpected.

One of the teams working on Anthologize during One Week | One Tool was an outreach team. We have found that outreach—or more crudely, marketing—is absolutely crucial to making open source tools successful. The One Week | One Tool outreach team made heavy use of Twitter, blogs, and other social media during the week Anthologize was built, and one of the strategies we employed was the Apple-style “unveil”—letting a user community know something is coming but not letting on as to what it will be. All twelve members of the One Week | One Tool crew—not only the outreach team, but the developers, designers, and project managers as well—joined in on this, live-Tweeting and live-blogging their work, but not letting on as to what they were building. This created a tremendous buzz around the work of the team in the digital humanities community and even among a broader audience (articles about One Week | One Tool turned up in The Atlantic, ReadWriteWeb, and the Chronicle of Higher Education). More interestingly, these broader communities joined in the discussion, inspired the team at CHNM to work harder to produce a tool (it actually put the fear of God in them), and ultimately influenced the design and distribution of the tool. It was, as Tim Carmody, now of Wired magazine, put it, representative of a new kind of “generative web event.”

Quoting his colleague, Robin Sloan, Tim lists the essential features of the generative web event:

Live. It’s an event that happens at a specific time and place in the real world. It’s something you can buy a ticket for—or follow on Twitter.

Generative. Something new gets created. The event doesn’t have to produce a series of luminous photo essays; the point is simply that contributors aren’t operating in playback mode. They’re thinking on their feet, collaborating on their feet, creating on their feet. There’s risk involved! And that’s one of the most compelling reasons to follow along.

Publishable. The result of all that generation ought, ideally, to be something you can publish on the web, something that people can happily discover two weeks or two years after the event is over.

Performative. The event has an audience—either live or online, and ideally both. The event’s structure and products are carefully considered and well-crafted. I love the BarCamp model; this is not a BarCamp.

Serial. It doesn’t just happen once, and it doesn’t just happen once a year. Ideally it happens… what? Once a month? It’s a pattern: you focus sharply on the event, but then the media that you produce flares out onto the web to grow your audience and pull them in—to focus on the next event. Focus, flare.

To this list I would add a sixth item, which follows from all of the above, and is perhaps obvious, but which I think we should make explicit. Generative web events are collaborative.

CHNM’s Hacking the Academy project is another example from digital humanities of this kind of generative web event. On May 21, 2010, Dan Cohen and I put out a call for “papers” for a collectively produced volume that would explore how the academy might be reformed using digital media and technology. We gave potential contributors only seven days to respond, and during this time we received more than 300 submissions from nearly 200 authors.

Turning this into the “book” that eventually became Hacking the Academy would take considerably longer than a week. The huge response presented us with a problem, one that required us to rethink our assumptions about the nature of authorship and editing and the relationship between the two. Reading through the submissions, some as long as 10,000 words, others as short as 140 characters, we struggled with how to accommodate such a diversity of forms and voices. Our key breakthrough came when we realized we had to let the writing dictate the form of the book rather than the opposite. We established three formal buckets (“feature essays,” “conversations,” and “voices”) and three topical buckets (“scholarship,” “teaching,” and “institutions”) into which we would fit the very best submissions. Some of the good longer pieces could stand on their own, relatively unedited, as features. But in most cases, we had to give ourselves permission to be almost ruthless in the editing (at least when compared to long accepted notions of authorial versus editorial prerogative in academic writing) so that submissions would fit into the formal and intellectual spaces we created. Long or short, formal or informal, we let the best writing rise to the top, selecting contributions (either entire pieces or very often just a particularly compelling paragraph) that could be juxtaposed or contraposed or placed in conversation with one another to best effect.

In the end, the “book” exists in several forms. There is the “raw” index of every submission. There is our 150-odd-page remix of this material, containing approximately 40 articles from more than 60 authors, which is being published online and in print by the University of Michigan’s MPublishing division and Digital Culture Books imprint. Then, and I think most interestingly, there are third-party remixes, including one by Mark Sample re-titled Hacking the Accident.

Appropriately, Hacking the Accident is itself a performance of sorts. Using the classic Oulipo technique of N+7, in which the author replaces every noun in a text with the noun seven dictionary entries ahead of it, Mark has created a new work, not of humanities scholarship, but of literature, or poetry, or theater, or something else altogether.
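The N+7 procedure is mechanical enough to sketch in a few lines of code. The toy Python version below is my own illustration, not Sample’s actual method: the short NOUNS list stands in for a real dictionary, nouns are found by simple lookup rather than part-of-speech tagging, and the offset wraps around at the end of the list.

```python
import re

# A stand-in for a real dictionary: a small alphabetized list of nouns.
NOUNS = sorted([
    "academy", "accident", "article", "audience", "author", "baseball",
    "book", "criticism", "dictionary", "event", "game", "humanities",
    "journal", "machine", "noun", "performance", "text", "tool",
])

def n_plus_7(text, nouns=NOUNS, offset=7):
    """Replace each known noun with the noun `offset` entries later."""
    index = {word: i for i, word in enumerate(nouns)}

    def swap(match):
        word = match.group(0)
        i = index.get(word.lower())
        if i is None:
            return word  # not a known noun: leave it untouched
        replacement = nouns[(i + offset) % len(nouns)]  # wrap at list end
        # Preserve the original word's capitalization.
        return replacement.capitalize() if word[0].isupper() else replacement

    return re.sub(r"[A-Za-z]+", swap, text)

print(n_plus_7("Hacking the Academy"))  # Hacking the Criticism
```

With a full dictionary, seven entries past “academy” lands on “accident,” which is what gives Sample’s remix its title; the toy list here lands on “criticism” instead, which is very much in the Oulipo spirit.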

These are just two examples, two with which I am particularly familiar, of what we might call “performative humanities.” There are others: most significantly, the lively performative exchanges that play out in the digital humanities Twittersphere every day. I wouldn’t go so far as to say performance is the future of humanities in general or even digital humanities in particular. But I do think the generative web event is one example of a game-changing development. Performance is a different ball game than publication. What it takes to make a successful performance is very different from what it takes to make a successful text: different skills, different labor arrangements, far more collaboration, and different economies than traditional humanities research.

We can look to new tools and new research findings, but I think we will only know for sure that digital humanities has changed the game when what it takes to succeed in the humanities has changed. We will know the game has changed when bunting and base-running aren’t working any more, and a new kind of player with a new set of skills comes to dominate the field of play.

[Image credit: Wikipedia]

[This post is based on a talk I gave on February 13, 2012 at Brown University in Providence, Rhode Island. Many thanks to Michael Satlow for the kind invitation, generous hospitality, and an excellent two-day workshop.]

Connecticut Forum on Digital Initiatives

Today, I’ll be speaking at the Connecticut Forum on Digital Initiatives at the Connecticut State Library under the catch-all title, “The Roy Rosenzweig Center for History and New Media: New initiatives, oldies but goodies, and partnership opportunities with ‘CHNM North’.” The long and short of it is that the institutional realities of being a grant-funded organization and the imperatives of the Web have meant that CHNM has from the beginning been a dynamic and entrepreneurial organization that’s always, always looking for new opportunities, new partners, new collaborations.

Among the projects I’ll point to are:

Partners wanted.

Post-Doc at CHNM (North)

Many Found History readers will know that I have recently moved full-time to Connecticut, working remotely and traveling to Fairfax four or five days each month to meet with the gang at CHNM. Since moving north, I have been lucky to make a slew of new friends and colleagues in the bustling New England public history and digital humanities communities. Several new collaborations are percolating, and CHNM is now seeking a post-doc to help lead one of these, with the Connecticut Humanities Council.

Here’s the advertisement. I hope to see your application.

Postdoctoral Research Fellow

George Mason University’s Roy Rosenzweig Center for History and New Media (CHNM), within the Department of History and Art History, is seeking a full-time Postdoctoral Research Fellow.

The Postdoctoral Research Fellow will work closely with CHNM’s Managing Director and colleagues at the Connecticut Humanities Council (CHC) on a new collaboration to create a central online resource for Connecticut state history. Based in Middletown, CT, near the campus of Wesleyan University, the Postdoctoral Research Fellow will provide primary project leadership, produce extensive historical content, and manage staff in close coordination with colleagues at CHNM and CHC. This is a unique opportunity to make a substantive leadership contribution to an innovative, high-visibility online resource in a relaxed but performance-centered environment with a team of humanists, designers, and developers working at the cutting edge of digital humanities.

We are looking for someone who has earned a doctoral degree in history or a closely related field and has hands-on experience in digital humanities work. Priority will be given to candidates with a track record of conceiving, managing, and completing Web-based and other public humanities projects. Experience writing for a production-oriented publication (e.g., a blog or newspaper) and familiarity with Wikipedia community norms and practices are preferred.

For full consideration, applicants must apply online at for position number F8860z; complete the faculty application; and upload a cover letter, curriculum vitae, and a list of three references with their contact information. We will begin considering applications on 9/13/11.

CHNM is the leading producer of open source tools for humanists and historical content on the Web (e.g.,,, and Each year CHNM’s award-winning project Web sites receive over 16 million visitors, and over a million people rely on its digital tools to teach, learn, and conduct research.

CHC is a public foundation incorporated in 1973 as a state-based affiliate of the National Endowment for the Humanities. CHC produces and funds public humanities programs that bring together people of different viewpoints, ages and backgrounds to explore issues of vital concern, share new ideas and perspectives, and experience the cultural richness around them.


We have been threatening to do it for years. Frustrated with the inadequacies of traditional modes of scholarly publishing for the digital age, we have long batted around the idea of launching a “CHNM Press.” Today, we are pleased to announce the launch of PressForward, a new initiative to explore and produce new means for collecting, screening, and drawing attention to the vast expanse of scholarship that is currently decentralized across the web or does not fit into traditional genres such as the journal article or the monograph. In recent years, on sites like Slashdot, Techmeme, and Google News, the web beyond academia has developed sophisticated mechanisms for filtering for quantity. Over centuries, the academy has honed a set of methods of filtering for quality, through peer review. PressForward aims to marry these old and new methods to expose and disseminate the very best in online scholarship. We are pleased to add PressForward to our stable of projects (including THATCamp and Hacking the Academy) that are re-imagining scholarly communication for the Internet age and grateful to the Alfred P. Sloan Foundation’s Digital Information Technology program for making this exciting new adventure possible.

Learn more about PressForward on our new site, or by sending us an email. You can also follow us on Twitter or via RSS.

Summer Blockbusters: Sci-fi and Alternate History

It seems the past has replaced the future as Hollywood’s preferred setting for summer’s science fiction blockbusters. Jon Favreau’s screen adaptation of the graphic novel Cowboys & Aliens imagines an extraterrestrial invasion of the Old West. X-Men: First Class offers a prequel to the popular franchise, tracing Magneto and Charles Xavier’s education and upbringing and (of course) crucial involvement in the Cuban Missile Crisis.

[Image credit: Wikipedia]

For Your Listening Pleasure: History Conversations

A few years back I had the bright idea to launch a second podcast (Digital Campus being the first). It languished. In fact, I only ever managed to record three episodes. The last one was recorded in February 2008.

It’s time to retire the website, but I don’t want to lose what I believe is some valuable content, especially the conversation I had with friends shortly after Roy’s death. So, here it is. The entire run of History Conversations, “an occasional dialogue with historians and history lovers about their interests, their ideas, and their lives in history,” in a single post.

Hello, World

In this pre-inaugural episode of History Conversations, Tom tests out his software and explains a little of the rationale behind the show. Join us in a couple weeks for our first conversation.

Running time: 4:41
Download the .mp3

Episode 1 – Peter Liebhold

Tom kicks off the podcast with a conversation with Peter Liebhold, Chair and Curator of the Division of Work and Industry at the Smithsonian’s National Museum of American History. Tom asks Peter about his daily work at the Museum, his straight and not-so-straight road into history, and the role of public history … and pledges not to go another four months between episodes.

Running time: 29:29
Download the .mp3

Episode 2 – Roy Rosenzweig, In Memoriam

In Episode 2 we remember Roy Rosenzweig, friend, colleague, and pioneer in all manner of public history. Guests Mike O’Malley (co-founder of the Center for History and New Media and Associate Professor of History at George Mason University), Steve Brier (Vice President for Information Technology and External Programs at the CUNY Graduate Center and co-founder of the American Social History Project), and Josh Brown (Executive Director of the American Social History Project and Professor of History in the Ph.D. program at the CUNY Graduate Center) join Tom for a conversation about Roy’s life, work, and long commitment to democratizing history.

Running time: 32:22
Download the .mp3

Episode 3 – A Look Back at Braddock

This month the volunteer historians of the Look Back at Braddock project join Tom for a conversation about the challenges and opportunities posed by local history. Located near the center of Fairfax County, Virginia, Braddock District has changed rapidly in the 20th century, and members of the community have taken it upon themselves to document the changes. Working largely without funding, John Browne, Mary Lipsey, Gil Donahue, and their colleagues have produced a rich oral history collection, a successful book, and a new website. What does it take for a group of committed amateurs to launch and sustain a multi-year history project and what keeps them going? Find out here in Episode 3 of History Conversations.

Running time: 31:42
Download the .mp3