Why Digital Humanities Is “Nice”

One of the things that people often notice when they enter the field of digital humanities is how nice everybody is. This can be in stark contrast to other (unnamed) disciplines where suspicion, envy, and territoriality sometimes seem to rule. Our most commonly used bywords are “collegiality,” “openness,” and “collaboration.” We welcome new practitioners easily and we don’t seem to get in lots of fights. We’re the Golden Retrievers of the academy. (OK. It’s not always all balloons and cotton candy, but most practitioners will agree that the tone and tenor of digital humanities is conspicuously amiable when compared to many, if not most, academic communities.)

There are several reasons for this. Certainly the fact that nearly all digital humanities is collaborative accounts for much of its congeniality—you have to get along to get anything accomplished. The fact that digital humanities is still young, small, vulnerable, and in need of solidarity also counts for something.

But I have another theory: Digital humanities is nice because we’re often more concerned with method than we are with theory. Why should a focus on method make us nice? Because methodological debates are often more easily resolved than theoretical ones. Critics approaching an issue with sharply opposed theories may argue endlessly over evidence and interpretation. Practitioners facing a methodological problem may likewise argue over which tool or method to use. Yet at some point in most methodological debates one of two things happens: either one method or another wins out empirically or the practical needs of our projects require us simply to pick one and move on. Moreover, as my CHNM colleague Sean Takats pointed out to me today, the methodological focus makes it easy for us to “call bullshit.” If anyone takes an argument too far afield, the community of practitioners can always put the argument to rest by asking to see some working code, a usable standard, or some other tangible result. In each case, the focus on method means that arguments are short.

And digital humanities stays nice.

THATCamp Groundrules

After giving my “groundrules” speech for a third THATCamp on Saturday, I realized I hadn’t published it anywhere for broader dissemination and possible reuse by the THATCamp community.

So here they are, THATCamp’s three groundrules:

  1. THATCamp is FUN – That means no reading papers, no PowerPoint presentations, no extended project demos, and especially no grandstanding.
  2. THATCamp is PRODUCTIVE – Following from the no papers rule, we’re not here to listen and be listened to. We’re here to work, to participate actively. It is our sincere hope that you use today to solve a problem, start a new project, reinvigorate an old one, write some code, write a blog post, cure your writer’s block, forge a new collaboration, or whatever else stands for real results by your definition. We are here to get stuff done.
  3. Most of all, THATCamp is COLLEGIAL – Everyone should feel equally free to participate and everyone should let everyone else feel equally free to participate. You are not students and professors, management and staff here at THATCamp. At most conferences, the game we play is one in which I, the speaker, try desperately to prove to you how smart I am, and you, the audience member, try desperately in the question and answer period to show how stupid I am by comparison. Not here. At THATCamp we’re here to be supportive of one another as we all struggle with the challenges and opportunities of incorporating technology in our work, departments, disciplines, and humanist missions. So no nitpicking, no tweckling, no petty BS.

Where's the Beef? Does Digital Humanities Have to Answer Questions?

The criticism most frequently leveled at digital humanities is what I like to call the “Where’s the beef?” question, that is, what questions does digital humanities answer that can’t be answered without it? What humanities arguments does digital humanities make?

Concern over the apparent lack of argument in digital humanities comes not only from outside our young discipline. Many practicing digital humanists are concerned about it as well. Rob Nelson of the University of Richmond’s Digital Scholarship Lab, an accomplished digital humanist, recently ruminated in his THATCamp session proposal, “While there have been some projects that have been developed to present arguments, they are few, and for the most part I sense that they haven’t had a substantial impact among academics, at least in the field of history.” A recent post on the Humanist listserv expresses one digital humanist’s “dream” of “a way of interpreting with computing that would allow arguments, real arguments, to be conducted at the micro-level and their consequences made in effect instantly visible at the macro-level.”

These concerns are justified. Does digital humanities have to help answer questions and make arguments? Yes. Of course. That’s what humanities is all about. Is it answering lots of questions currently? Probably not really. Hence the reason for worry.

But this suggests another, more difficult, more nuanced question: When? When does digital humanities have to produce new arguments? Does it have to produce new arguments now? Does it have to answer questions yet?

In 1703 the great instrument maker, mathematician, and experimenter Robert Hooke died, vacating the suggestively named position he had occupied for more than forty years: Curator of Experiments to the Royal Society. In this role, it was Hooke’s job to prepare public demonstrations of scientific phenomena for the Fellows’ meetings. Among Hooke’s standbys in these scientific performances were animal dissections, demonstrations of the air pump (made famous by Robert Boyle but made by Hooke), and viewings of pre-prepared microscope slides. Part research, part ice breaker, and part theater, these performances served in part to entertain the wealthier Fellows of the Society, many of whom were chosen for election more for their patronage than for their scientific achievements.

Upon Hooke’s death the position of Curator of Experiments passed to Francis Hauksbee, who continued Hooke’s program of public demonstrations. Many of Hauksbee’s demonstrations involved the “electrical machine,” essentially an evacuated glass globe which was turned on an axle and to which friction (a hand, a cloth, a piece of fur) was applied to produce a static electrical charge. The machine had been invented some years earlier, but Hauksbee greatly improved it to produce ever greater charges. Perhaps his most important improvement was the addition to the globe of a small amount of mercury, which produced a glow when the machine was fired up. In an age of candlelight and on a continent of long, dark winters, the creation of a new source of artificial light was sensational and became a popular learned entertainment, not only in meetings of early scientific societies but in aristocratic parlors across Europe. Hauksbee’s machine also set off an explosion of electrical instrument making, experimentation, and descriptive work in the first half of the 18th century by the likes of Stephen Gray, John Desaguliers, and Pieter van Musschenbroek.

And yet not until later in the 18th century and early in the 19th century did Franklin, Coulomb, Volta, and ultimately Faraday provide adequate theoretical and mathematical answers to the questions of electricity raised by the electrical machine and the phenomena it produced. Only after decades of tool building, experimentation, and description were the tools sufficiently articulated and phenomena sufficiently described for theoretical arguments to be fruitfully made.*

There’s a moral to this story. One of the things digital humanities shares with the sciences is a heavy reliance on instruments, on tools. Sometimes new tools are built to answer pre-existing questions. Sometimes, as in the case of Hauksbee’s electrical machine, new questions and answers are the byproduct of the creation of new tools. Sometimes it takes a while, and in the meantime the tools themselves and the whiz-bang effects they produce must be the focus of scholarly attention.

Eventually digital humanities must make arguments. It has to answer questions. But does it have to do so yet? Like 18th century natural philosophers confronted with a deluge of strange new tools like microscopes, air pumps, and electrical machines, maybe we need time to articulate our digital apparatus, to produce new phenomena that we can neither anticipate nor explain immediately. At the very least, we need to make room for both kinds of digital humanities, the kind that seeks to make arguments and answer questions now and the kind that builds tools and resources with questions in mind, but only in the back of its mind and only for later. We need time to experiment and even—as we discussed recently with Bill Turkel and Kevin Kee on Digital Campus—time to play.

The 18th century electrical machine was a parlor trick. Until it wasn’t.


* For more on Hooke, see J.A. Bennett, et al., London’s Leonardo: The Life and Work of Robert Hooke (Oxford, 2003). For Hauksbee and the electrical machine, see W.D. Hackmann, Electricity from Glass: The History of the Frictional Electrical Machine, 1600–1850 (Alphen aan den Rijn, 1978) and Terje Brundtland, “From Medicine to Natural Philosophy: Francis Hauksbee’s Way to the Air-Pump,” The British Journal for the History of Science (June 2008), pp. 209–240. For 18th century electricity in general, J.L. Heilbron, Electricity in the 17th and 18th Centuries: A Study of Early Modern Physics (Berkeley, 1979) is still the standard. Image of Hauksbee’s Electrical Machine via Wikimedia Commons.

Rethinking Access

[This week and next I’ll be facilitating the discussion of “Learning & Information” at the IMLS UpNext: Future of Museums and Libraries wiki. The following is adapted from the first open thread. Please leave any comments at UpNext to join in the wider discussion!]

In addition to the questions posted on the main page for this theme—I will be starting threads for each of those over the course of the next two weeks—something that has been on my mind lately is the question, “What is access?”

Over the past ten or fifteen years, libraries and museums have made great strides in putting collections online. That is an achievement in itself. But beyond a good search and usable interfaces, what responsibilities do museums and libraries have to their online visitors to contextualize those materials, to interpret them, to scaffold them appropriately for scholarly, classroom, and general use?

My personal feeling is that our definition of what constitutes “access” has been too narrow, that real access has to mean more than the broad availability of digitized collections. Rather, in my vision, true access to library and museum resources must include access to the expertise and expert knowledge that undergirds and defines our collections. This is not to say that museum and library websites don’t provide that broader kind of access; they often do. It’s just to say that the two functions are usually performed separately: first comes database access to collections material, then comes (sometimes yes, sometimes no, often depending on available funding) contextual and interpretive access.

What I’d like to see in the future—funders take note!—is a more inclusive definition of access that incorporates both things (what I’m calling database access and contextual access) from the beginning. So, in my brave new world, as a matter of course, every “access” project funded by agencies like IMLS would include support both for mounting collections online and for interpretive exhibits and other contextual and teaching resources. In this future, funding access equals funding interpretation and education.

Is this already happening? If so, how are museums and libraries treating access more broadly? If not, what problems do you see with my vision?

[Please leave comments at UpNext.]

"Soft" [money] is not a four-letter word

I will be the first to say that I have been, and continue to be, extremely lucky. As I explained in an earlier post, I have managed to strike a workable employment model somewhere between tenured professor and transient post-doc, expendable adjunct, or subservient staffer, a more or less happy “third way” that provides relative security, creative opportunity, and professional respect. The terms of my employment at the Center for History and New Media (CHNM) may not be reproducible everywhere. Nor do I see my situation as any kind of silver bullet. But it is one model that has seemed to work in a particular institutional and research context, and I offer it mainly to show that fairness doesn’t necessarily come in the form of tenure and that other models are possible.

Taking this argument further, I would also suggest that fairness does not necessarily come in the form of what we in the educational and cultural sectors tend to call “hard money,” i.e. positions that are written into our institutions’ annual budgets.

Of course, the first thing to admit about “hard money” is that it doesn’t really exist. As we have seen in the recent financial crisis, especially in layoffs of tenure-track and even tenured faculty and in the elimination of boat-loads of hard lines in library and museum budgets, hard money is only hard until someone higher up than a department chair, dean, or provost decides that it’s soft.

The second thing to acknowledge is that the concept of “hard” versus “soft” money really only exists in academe. If those terms were extended to the rest of the U.S. economy—the 90+ percent of the U.S. labor force not employed by institutions of higher education (although government may be another place where this distinction is meaningful)—we’d see that most people are on “soft” money. My wife has been employed as a lawyer at a fancy “K Street” law firm in Washington, DC for going on six years now. She makes a very good living and is, by the standards of her chosen profession, very successful. And yet, you guessed it, she is on soft money. If for some reason the firm loses two, three, or four of its large clients, her billing and hence the money to pay her salary will very quickly dry up, and the powers that be will be forced to eliminate her position. This is true for almost any job you can point to. If revenues do not match projections, layoffs occur. One can debate the justice of particular layoffs and down-sizings, but without wholesale changes to our economy, the basic rule of “no money in, no money out” is hard to deny.

Indulge me for a moment in a bit of analogy. In some ways, CHNM is very much like any other business. At CHNM we have clients. Those clients are our funders. We sell products and services to those clients. Those products and services are called digital humanities projects. Our funder clients pay us a negotiated price for those products and services. We use those revenues to pay the employees who produce the products and services for our clients. To keep the wheels turning, we sell more products and services to our clients, and if an existing client doesn’t want or need what we’re selling anymore, we either find new clients or change the range of products and services we offer. Failing that, we will have to start reducing payroll.

How is this situation any different from, or any worse than, that in any other sector of the economy? If people stop buying Word and Excel, Microsoft will have to find something else to sell people or lay off the engineers, designers, project managers, and other staff who make MS Office.

I understand that so crass an analogy to corporate America will make many people unhappy. The idealist in me recoils from the notion that the academy should be treated as just another business. Yet the pragmatist in me—a side that is certainly stronger than it would otherwise be from dealing for so long with the often very practical, hands-on work of digital humanities and the frequent sleepless nights that come with the responsibility of managing a budget that supports nearly fifty employees—thinks it foolish to reject out of hand employment models that, however imperfect, have worked to produce so much and provide livelihoods for so many. (Indeed, the democrat in me also has to ask, what makes us in academe so special as to deserve and expect freedoms, security, and privileges that the rest of the labor force doesn’t?)

Therefore, in my book, “soft money” isn’t necessarily and always bad. If it funds good, relatively secure, fairly compensated jobs, then soft money is OK. CHNM has several senior positions funded entirely on soft money and several employees who have been with us on soft money for five, six, and seven years—a long time in the short history of digital humanities.

What isn’t OK is when “soft” equals “temporary” or “term.” This, I readily acknowledge, is an all too frequent equation. Many, if not most, soft money post-doc, research faculty, and staff positions are created upon the award of a particular grant to work on that grant and that grant alone, and only until the term of the grant expires. I make no bones about the fact that these defined-term, grant-specific jobs are inferior to tenured, tenure-track, or even corporate-sector employment.

At CHNM we try to avoid creating these kinds of jobs. Since at least 2004, instead of hiring post-docs or temporary staff to work on a particular grant-funded project when it is awarded, where possible we try to hire people to fill a set of generalized roles that have evolved over the years and proven themselves necessary to the successful completion of nearly any digital humanities project: designer, web developer, project manager, outreach specialist. Generally our people are not paid from one grant, but rather from many grants. At any given moment, a CHNM web designer, for example, may be paid from as many as four or five different grant budgets, her funding distribution changing fairly frequently as her work on a particular project ends and work on another project begins. This makes for very complicated accounting and lots of strategic human resource decisions (this is one of the big headaches of my job), but it means that we can keep people around as projects start and end and funders come and go. Indeed, as the funding mosaic becomes ever more complex, when viewed from a distance (i.e. by anyone but me and a few other administrative staff who deal with the daily nitty-gritty) the budget picture begins to look very much like a general fund and staff positions begin to look like budget lines.

Perceptive readers will by now be asking, “Yes, but how did CHNM get to the point where it had enough grants and had diversified its funding enough to maintain what amounts to a permanent staff?” and I’ll readily admit there is a chicken-and-egg problem here. But how CHNM got to where it is today is a topic for another day. The point I’d like to make today is simply that—if we can get beyond thinking about project funding—soft money isn’t essentially bad for either the people funded by it or the institution that relies on it. On the contrary, it can be harnessed toward the sustainable maintenance of an agile, innovation-centered organization. While the pressure of constantly finding funding can be stressful and a drag, it doesn’t have to mean bad jobs and a crippled institution.

Just the opposite, in fact. Not only does CHNM’s diversified soft money offer its people some relative security in their employment; pooling our grant resources to create staff stability also makes it easier for us to bring in additional revenue. Having people in generalized roles already on our payroll allows us to respond with confidence and speed as new funding opportunities present themselves. That is, our financial structure has enabled us to build the institutional capacity to take advantage of new funding sources, to be confident that we can do the work in question, to convince funders that this is so, and in turn to continue to maintain staff positions and further increase capacity.

CHNM is by no means perfect. Not all jobs at CHNM are created equal, and like everyone in the digital humanities we struggle to make ends meet and keep the engine going. In a time of increasingly intense competition for fewer and fewer grant dollars, there is always a distinct chance that we’ll run out of gas. Nevertheless, it is soft money that so far has created a virtuous and, dare I say, sustainable cycle.

Thus, when we talk about soft money, we have to talk about what kind of soft money and how it is structured and spent within an institution. Is it structured to hire short term post-docs and temporary staff who will be let go at the end of the grant? Or is it structured and diversified in such a way as to provide good, relatively stable jobs where staff can build skills and reputation over a period of several years?

When soft money means temporary and insecure, soft money is bad. When soft money facilitates the creation of good jobs in digital humanities, in my book at least, soft money is OK.

[Note: This post is part of a draft of a longer article that will appear in a forthcoming collection to be edited by Bethany Nowviskie on alternative careers for humanities scholars.]

[Image credits: Denni Schnapp, identity chris is.]

3 Innovation Killers in Digital Humanities

Here’s a list of three questions one might overhear in a peer review panel for digital humanities funding, each of which can kill a project in its tracks:

  • Haven’t X, Y, and Z already done this? We shouldn’t be supporting duplication of effort.
  • Are all of the stakeholders on board? (Hat tip to @patrickgmj for this gem.)
  • What about sustainability?

In their right place, each of these is a valid criticism. But they shouldn’t be leveled reflexively. Sometimes X, Y, and Z’s project stinks, or nobody uses it, or their code is lousy. Sometimes stakeholders can’t see through the fog of current practice and imagine the possible fruits of innovation. Sometimes experimental projects can’t be sustained. Sometimes they fail altogether.

If we are going to advance a field as young as digital humanities, if we are going to encourage innovation, if we are going to raise the bar, we sometimes have to be ready to accept “I don’t know, this is an experiment” as a valid answer to the sustainability question in our grant guidelines. We are sometimes going to have to accept duplication of effort (aren’t we glad someone kept experimenting with email and that the 1997 version of Hotmail wasn’t the first and last word in webmail?). And true innovation won’t always garner broad support among stakeholders, especially at the outset.

Duplication of effort, stakeholder buy-in, and sustainability are all important issues, but they’re not all-important. Innovation requires flexibility, an acceptance of risk, and a measure of trust. As Dorothea Salo said on Twitter, when considering sustainability, for example, we should be asking “‘how do we make this sustainable?’ rather than ‘kill it ‘cos we don’t know that it is.'” As Rachel Frick said in the same thread, in the case of experimental work we must accept that sustainability can “mean many things,” for example “document[ing] the risky action and results in an enduring way so that others may learn.”

Innovation makes some scary demands. Dorothea and Rachel offer some thoughts on how to balance those demands against the other, legitimate demands of grant funding. We’re going to need some more creative thinking if we’re going to push the field forward.

Late update (10/16/09): Hugh Cayless at Scriptio Continua makes the very good, very practical point that “if you’re writing a proposal, assume these objections will be thrown at it, and do some prior thinking so you can spike them before they kill your innovative idea.” An ounce of prevention is worth a pound of cure … or something like that.

Thinking the Unthinkable

Clay Shirky’s widely circulated post, Newspapers and Thinking the Unthinkable, has got me thinking about the “unthinkable” in humanities scholarship. According to Shirky, in the world of print journalism, the unthinkable was the realization that newspapers would not be able to transfer their scarcity-of-information-based business model to the internet. It was publishers’ inability to imagine a business model for a world in which information is easily distributed that led to the crisis in which newspapers find themselves today. He writes,

The unthinkable scenario unfolded something like this: The ability to share content wouldn’t shrink, it would grow. Walled gardens would prove unpopular. Digital advertising would reduce inefficiencies, and therefore profits. Dislike of micropayments would prevent widespread use. People would resist being educated to act against their own desires. Old habits of advertisers and readers would not transfer online. Even ferocious litigation would be inadequate to constrain massive, sustained law-breaking. (Prohibition redux.) Hardware and software vendors would not regard copyright holders as allies, nor would they regard customers as enemies. DRM’s requirement that the attacker be allowed to decode the content would be an insuperable flaw. And, per Thompson, suing people who love something so much they want to share it would piss them off.

In our world, easy parallels to newspaper publishers can be made, for instance, with journal publishers or the purveyors of subscription research databases (indeed the three are often one and the same). I’m sure you can point to lots of others, and I’d be very happy to hear them in comments. But what interests me most in Shirky’s piece are his ideas about how the advent of the unthinkable divides a community of practitioners. These comments hit a little closer to home. Shirky writes,

Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.

When reality is labeled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en masse. This shunting aside of the realists in favor of the fabulists has different effects on different industries at different times. One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.

Again, we can probably point pretty easily to both “realists” (who get it) and “fabulists” (who don’t or won’t) in academic publishing. But the analogy extends deeper than that. There are strong and uncomfortable parallels within our own disciplines.

The question is this: Just who are the pragmatists and who are the radicals in our departments? Maybe those of us who spend our time taking digital technologies seriously aren’t radical at all. Maybe those of us in digital humanities centers (read: “Innovation Departments”) are simply realists, while our more traditional colleagues are fabulists, faithfully clinging to ways of doing things that are already past. Listening to some colleagues talk about the dangers of Wikipedia, for instance, or the primacy of university-press-published, single-authored monographs, or problems of authority in the social tagging of collections, it certainly sometimes feels that way. Conversely what we do in digital humanities surely feels pragmatic, both day-to-day and in our broader focus on method.

Obviously we can’t and shouldn’t divide scholars so neatly into two camps. Nor do I think we should so casually dismiss traditional scholarship any more than we should uncritically celebrate the digital. Yet it’s worth thinking for a minute of ourselves as realists rather than revolutionaries. If nothing else, it may keep us focused on the work at hand.

Brand Name Scholar

Scholars may not like it, but that doesn’t change the fact that in the 21st century’s fragmented media environment, marketing and branding are key to disseminating the knowledge and tools we produce. This is especially true in the field of digital humanities, where we are competing for attention not only with other humanists and other cultural institutions, but also with titans of the blogosphere and big-time technology firms. Indeed, CHNM spends quite a bit of energy on branding—logo design, search engine optimization, cool SWAG, blogs like this one—something we view as central to our success and our mission: to get history into as many hands as possible. (CHNM’s actual mission statement reads, “Since 1994 under the founding direction of Roy Rosenzweig, CHNM has used digital media and computer technology to democratize history—to incorporate multiple voices, reach diverse audiences, and encourage popular participation in presenting and preserving the past.”)

In my experience, branding is mostly a game learned by trial and error, which is the only way to really understand what works for your target audience. But business school types also have some worthwhile advice. One good place to start is a two-part series on “personal branding” from Mashable, which provides some easy advice for building a brand for yourself or your projects. Another very valuable resource, which was just posted yesterday, is the Mozilla Community Marketing Guide. In it, the team that managed to carve out a 20% market share from Microsoft for the open source web browser Firefox provides invaluable guidance not only on branding, but also on giving public presentations, using social networking, finding sponsorships, and dealing with the media, all of which is widely transferable to marketing digital humanities and cultural heritage projects.

It may not be pretty, but in an internet of more than one trillion pages, helping your work stand out is no sin.

(Note: I’ll be leading a lunchtime discussion of these and other issues relating to electronic marketing and outreach for cultural heritage projects later today at the IMLS WebWise conference in Washington, D.C. I’ll be using #webwise on Twitter if you’d like to follow my updates from the conference.)

Making It Count: Toward a Third Way

Over the summer there was much discussion among my colleagues about making digital humanities work “count” in academic careers. This included two fantastic threads on Mills Kelly’s Edwired blog, a great post by Cathy Davidson, and an informal chat on our own Digital Campus podcast. As usual the topic of tenure also undergirded discussions at the various digital humanities workshops and conferences I attended during June, July, and August. The cooler weather and tempers of autumn having arrived, I’d like to take a quick look back and commit to writing some of the thoughts I offered on our podcast and at these meetings.

Let me use Mills’ “Making Digital Scholarship Count” series as a starting point. For those of you who weren’t following his posts, Mills argues that if scholars want digital scholarship to count in traditional promotion and tenure decisions, then they have to make sure it conforms to the characteristics and standards of traditional scholarship (though Mills points out that some of those standards, such as peer review, will have to be modified slightly to accommodate the differences inherent in digital scholarship). At the same time Mills suggests that we have to accept that digital work that does not fit the standards of traditional scholarship, no matter how useful or well done, will not count in traditional promotion and tenure decisions. Essentially Mills makes a distinction between digital “scholarship” and other kinds of digital “work,” the first of which bears the characteristics of traditional scholarship and the second of which does not. The first should count as “scholarship” in promotion and tenure decisions. The second should not. Rather it should count as “service” or something similar.

I more or less agree with this, and I’m fine with Mills’ distinction. Communities have the right to set their own standards and decide what counts as this or that. But this situation does raise questions for those of us engaged primarily in the second kind of activity, in digital humanities “work.” What happens to the increasing numbers of people employed inside university departments doing “work,” not “scholarship”? In universities that have committed to digital humanities, shouldn’t the work of creating and maintaining digital collections, building software, experimenting with new user interface designs, mounting online exhibitions, providing digital resources for students and teachers, and managing the institutional teams upon which all digital humanities depend count for more than service does under traditional P&T rubrics? Personally I’m not willing to admit that this other kind of digital work is any less important for digital humanities than digital scholarship, which frankly would not be possible without it. All digital humanities is collaborative, and it’s not OK if the only people whose careers benefit from our collaborations are the “scholars” among us. We need the necessary “work” of digital humanities to count for those people whose jobs are to do it.

Now I’m not arguing we bestow tenure in the history department for web design or project management, even if it’s done by people with PhDs. What I am saying is that if we’re going to do digital humanities in our departments, then we need something new. It can’t be tenure-track or nothing. With the emergence of the new digital humanities, we need some new employment models.

I myself do relatively little work that would fit traditional definitions of scholarship. Practically none of my digital work would. Because of that I am more than willing to accept that tenure just isn’t in the picture for me. With my digital bent I am asking for a change in the nature of academic work, and therefore I have to be willing to accept a change in the nature and terms of my academic employment.

That said, I am not willing to accept the second-class status of, for instance, an adjunct faculty member. My work—whether it is “scholarship” or not—wins awards, attracts hundreds of thousands of dollars in grant funding, turns up periodically on CNN and in the New York Times, enables the work of hundreds of other academics, and is used every day by thousands of people, scholars and non-scholars alike. That may not make it tenureable, but it’s certainly not second class. My work requires a “third way.”

Fortunately I’m at an institution committed to digital humanities and willing to experiment with new models of academic employment. Technically I have two titles, “Managing Director of the Center for History & New Media” and “Research Assistant Professor.” That puts me somewhere between an untenured administrative faculty member and an untenured research faculty member. It is a position which would frighten some of my tenure-track colleagues terribly, and I can, indeed, be fired from my job. Sometimes that worries me too. Then I remember that probably 99% of the rest of working Americans can also be fired from their jobs. I also remember that just like that other 99%, if I do what’s expected of me, it probably won’t happen. If I continue to win grants and awards from panels of my peers and continue to produce quality, well-received, well-used digital humanities products, I’ll probably continue to have a job. If I exceed expectations, I’ll probably advance.

Just as important to note are the benefits my job has over more traditional scholarly career paths, some of which are pretty serious. I’m not terrorized by the formalized expectations that accompany traditional P&T decisions. I won’t perish if I don’t publish. I also don’t have fixed teaching obligations. I can focus full-time on my research, and I have greater freedom and flexibility to explore new directions than most of my tenure-track colleagues. I get to work on lots of things at once. Some of these experiments are likely to fail, but as long as most succeed, that’s expected and OK. I manage my own travel budgets and research schedule rather than being held hostage to department committees. I get to work every day with a close-knit team of like-minded academics rather than alone in a library. I have considerably greater freedom to negotiate my pay and benefits. And to the extent that it advances the mission and interests of the Center for History & New Media, this blog “counts.”

Mine is not a tenure-track position, and based on the work I do, I don’t expect it to be. Nor do I care. There are some downsides and some upsides to my position, but it’s a reasonably happy third way. More importantly, I believe it is a necessary third way for the digital humanities, which in Mills’ terms require not only digital “scholarship” but also digital “work.” I’m lucky to be at an institution and to have colleagues that make this third way possible. Other institutions looking to build digital humanities capacity should follow suit. If digital humanities are going to flourish in the academy, we need both to accept and advocate for new models of academic employment.

[Image credit: Dave Morris]

Late Update (10/2/08): I very absentmindedly neglected to list my friend Margie McLellan among the important voices in this discussion. Along with Mills and Cathy Davidson, Margie’s three posts, On Defining Scholarship, Scholarship Update, and Is a Blog Scholarship?, are required reading on these matters.

Thoughts on THATCamp

Last week CHNM hosted the inaugural THATCamp to what seemed to me like great success. Short for “The Humanities and Technology Camp,” THATCamp is a BarCamp-style, user-generated “unconference” on digital humanities. Structurally, it differs from an ordinary conference in two ways: first in that its sessions are organized by participants themselves (ahead of time through a blog, but mainly on the day of the conference) rather than by a program committee, and second in that everyone is expected to participate actively—to present a project, share some skill, and collaborate with fellow participants. We first started thinking about THATCamp as many as two or three years ago, and I was thrilled to see it finally get off the ground, thanks in large part to the extraordinary efforts and energy of Jeremy Boggs and Dave Lester, who will be presenting their own thoughts on the matter in a forthcoming episode of THATPodcast.

To begin with let me say the sessions were fantastic. I particularly benefited from conversations on F/OSS design and development processes, event standards, and sustainability. Nevertheless I have to admit I was just as interested in the process of THATCamp as I was in its products. Throughout the weekend I was paying as much attention to how THATCamp worked as to the work that was actually done there. I’d like to share three observations in this regard:

  • First and foremost, I think it is very important to stress that THATCamp was cheap. The cost of the weekend was around $3000. Total. That included a fairly lavish breakfast and lunch buffet on both days, lots of caffeinated drinks, t-shirts for everyone involved, pretty badges and lanyards, office supplies (post-its, pens), room fees, and a couple of student travel stipends. Those modest costs were paid through a combination of sponsorships (the GMU provost’s office, NiCHE, NYPL, and CHNM’s own Zotero project) and voluntary donations from THATCamp participants (we suggested $20 and passed a hat around on the first day). Most participants had to fund their own travel, but still.
  • Second, THATCamp was honest. Mills has already pointed out how the unconference sessions at THATCamp were so much more engaging than the standard “panelist reads at you” conference session model. That’s certainly true. But it wasn’t just the format that made these discussions more useful. It was the attitude. At most scholarly conferences, everyone seems to have something to prove—specifically, how smart they are. We have all seen people shouted down at conferences and how destructive that can be, especially to a young scholar (I have seen people in tears). But at THATCamp, instead of trying to out-smart each other, campers came clean about their failures as well as their successes, their problems as well as their solutions. By admitting, rather than covering up, gaps in their knowledge, campers were able to learn from each other. This honesty made THATCamp truly productive.
  • Third, THATCamp was democratic. In large part because Jeremy and Dave (both students as well as kickass digital humanists) did most of the work, but also because of the transparency, informality, and openness of the process and discussions, professional status didn’t seem to count for much at THATCamp. Full professors, associate professors, assistant professors, research faculty, museum and library professionals from big and small institutions at all levels, and graduate students seemed to mix easily and casually. More than once I saw a student or young professional challenge a more senior colleague. Even more often I saw the groups laughing, chatting, sharing ideas. That’s good for everybody.

I’m not going to lie. THATCamp was a ton of work, and it wasn’t perfect by any means. I’m not sure, for instance, how many publications will result from the sessions. But I do think it was a truly different and useful way of forging new collaborations, building a community of practice, making connections to people with answers to your questions, supporting student work and thought, and solving practical problems. The model is particularly appropriate for a very hands-on discipline like digital humanities, but the three observations above suggest it should and could easily be extended to other, more traditional disciplines. Mills has already called on the American Historical Association to dedicate 5% of its program to THATCamp-style activities, and Margie McLellan is hoping to encourage the Oral History Association to do the same. I’d also encourage humanities departments, graduate student committees, and other research institutions to try. We all lament the lack of community and collegiality in our profession and decry the cutthroat competitiveness in our fields. It seems to me that THATCamp is a cheap and easy antidote.

[Image: “Dork Shorts” session sign-up board, credit Dave Lester.]