New Wine in Old Skins: Why the CV needs hacking

Likewise, no one pours new wine into old wineskins. Otherwise, the wine will burst the skins, and both the wine and the skins are ruined. Rather, new wine is poured into fresh wineskins. – Mark 2:22

Since my first foray into digital humanities as a newly minted graduate working on a project to catalog history museum websites (yes, in 1996 you could actually make a list of every history museum with a website, about 150 at the time), most discussions about careers in digital humanities have centered on how to convince more traditional colleagues to accept digital work as scholarship, to make it “count” for tenure and promotion, that is, to make it fit into traditional structures of academic employment. This has been a hard sell because, as Mills has pointed out, the kind of work done by digital humanists, no matter how useful, interesting, and important, often just can’t be made to fit the traditional definitions of scholarship used to determine eligibility for academic career advancement. No amount of bending and squeezing and prodding and poking is going to help the new square pegs of digital humanities fit the old round holes used to assess traditional textual scholarship.

Having seen their older colleagues struggle through stages of denial, anger, bargaining, and depression, a new generation of digital humanists (some of us) is coming to accept this situation. Rather than fighting to have its work credited within the existing structures of academic career advancement, this new generation has decided instead to alter those structures or replace them with ones that judge digital work on its own merits. It is hacking the academy to create new structures more natively accepting of digital work. These new structures—as imperfect and tenuous as newly forked code—can be seen in the job descriptions and contract arrangements of many in the alt-ac crowd.

Yet however much we have hacked academic employment to better accommodate digital work, at least one structure has remained stubbornly intact: the CV, or curriculum vitae. For the most part our CVs look the same as our analog colleagues’. Should this be? Isn’t this pouring new wine into old wineskins? Aren’t we setting ourselves up for failure if we persist in marketing our digital achievements using a format designed to highlight analog achievements? The standard categories of education, awards, publications, and so on (see this fairly representative guide from MIT [.pdf]) only reinforce the mismatch. If we are going to market our work effectively, we need a new vehicle for the construction of professional identity.

There is nothing immutable about the CV. As far as I can tell from a few hours’ research, the CV in its current form emerged in the late 19th and early 20th centuries, right around the time our modern disciplines were consolidating the academy. The OED dates the first use of “curriculum vitae” to mean “a brief account of one’s career” to the turn of the last century (“Anciently biography was more of a mere curriculum vitæ than it is now,” New Internat. Encycl. III. 21/2, 1902). The British term, “vita,” appears at just about the same time. A quick search of the “help wanted” pages of a few major American newspapers yields a similar result for the first use of the term. A December 3, 1908, advertisement in The Washington Post asks:

HELP WANTED—MALE: IN A PATENT OFFICE—YOUNG GERMAN, HAVING passed schools in Germany; salary $30 to start, gradually increasing. Send curriculum vitae to G. DITTMAR, 702 Ninth st. nw.

Considering its importance in shaping the modern academy and constructing the modern notion of the scholar, there is little (very little, in fact; I couldn’t find anything) written on the CV. Yet even from this very cursory bit of research we can say one thing definitively: the CV is a social and historical construct. It hasn’t always existed, and it is not an essential ingredient for the successful creation and dissemination of scholarship. Erasmus didn’t have one, for example.

I’m ready to accept that the successful operation of the academy requires a vehicle, even a standardized vehicle, for constructing and communicating scholarly identity. But it doesn’t have to be, and hasn’t always been, the CV—certainly not the one we were told to write in grad school. The CV is a platform for constructing and communicating professional achievement and identity, and like any platform, it’s hackable.

So, I say we need, and can build, a new CV, or whatever you want to call it. But what does this new CV look like? Here are at least some of the criteria a new vision for the professional identity document should meet (I use the word “document” here simply as a shorthand, not to suggest the format or material existence this new thing should take):

  1. Its primary presentation should be digital. A print version of the document may exist, but it should be born digital to make best use of the special qualities of digital media, which undoubtedly will do a better job of representing digital work than the analog technologies of print. We should look to discussions around the notion of e-portfolios in the educational technology community for ideas.
  2. It should eschew the visual hierarchies that privilege print scholarship in the traditional CV. Specifically, the vertical orientation that inevitably puts digital work below analog work should be eliminated.
  3. It should adequately represent collaborative work. You should be able to put a collaborative product (a website, a software project, an exhibit) on your CV without diminishing your colleagues’ contributions but also without feeling guilty about listing it under your name. We need a better way to represent group work (one possible shape for this is sketched after the list).
  4. It should credit processes as well as products. Put another way, we need to elevate activities previously relegated to the category of “service” in our career presentations. Much of the real work of digital humanities involves project management, organization, partnership building, network building, curation, and mentoring, and these processes need to be credited accordingly. The development and implementation of new ways of working constitute significant achievements in digital humanities. New methods should be credited equally with new modalities of scholarship.
  5. It should be used. If digital humanists create these new documents but persist in using their old paper CVs to apply for jobs, the new documents will be doomed to fail.
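
To make criteria 1 through 4 a little more concrete, here is a minimal, purely hypothetical sketch of what one entry in a machine-readable version of such a document might look like. Every field name, person, and URL below is invented for illustration; nothing here is a proposed standard.

```python
# Hypothetical sketch of a single entry in a machine-readable professional
# identity document. Every field name, person, and URL below is invented for
# illustration; nothing here is a proposed standard.
import json

entry = {
    "title": "Example Collaborative Web Archive",       # a made-up project
    "kind": "digital project",                           # no print/digital hierarchy implied
    "contributors": [                                    # everyone is credited, with roles
        {"name": "A. Scholar", "roles": ["project management", "curation"]},
        {"name": "B. Developer", "roles": ["web development", "interface design"]},
    ],
    "processes": ["partnership building", "mentoring"],  # process credited alongside product
    "evidence": ["https://example.org/project"],         # link out to the living, digital work
}

print(json.dumps(entry, indent=2))
```

The details don’t matter; the point is simply that contributors and their roles, and the processes behind a product, sit at the same level as the product itself, with no built-in hierarchy pushing digital work below print.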

There are surely other criteria this new document should meet. Let’s brainstorm in the comments and start helping ourselves.

Why Digital Humanities is “Nice”

One of the things that people often notice when they enter the field of digital humanities is how nice everybody is. This can be in stark contrast to other (unnamed) disciplines where suspicion, envy, and territoriality sometimes seem to rule. By contrast, our most commonly used bywords are “collegiality,” “openness,” and “collaboration.” We welcome new practitioners easily and we don’t seem to get in lots of fights. We’re the Golden Retrievers of the academy. (OK. It’s not always all balloons and cotton candy, but most practitioners will agree that the tone and tenor of digital humanities is conspicuously amiable when compared to many, if not most, academic communities.)

There are several reasons for this. Certainly the fact that nearly all digital humanities is collaborative accounts for much of its congeniality—you have to get along to get anything accomplished. The fact that digital humanities is still young, small, vulnerable, and requiring of solidarity also counts for something.

But I have another theory: Digital humanities is nice because we’re often more concerned with method than we are with theory. Why should a focus on method make us nice? Because methodological debates are often more easily resolved than theoretical ones. Critics approaching an issue with sharply opposed theories may argue endlessly over evidence and interpretation. Practitioners facing a methodological problem may likewise argue over which tool or method to use. Yet at some point in most methodological debates one of two things happens: either one method or another wins out empirically or the practical needs of our projects require us simply to pick one and move on. Moreover, as my CHNM colleague Sean Takats pointed out to me today, the methodological focus makes it easy for us to “call bullshit.” If anyone takes an argument too far afield, the community of practitioners can always put the argument to rest by asking to see some working code, a useable standard, or some other tangible result. In each case, the focus on method means that arguments are short.

And digital humanities stays nice.

THATCamp Groundrules

After giving my “groundrules” speech for a third THATCamp on Saturday, I realized I hadn’t published it anywhere for broader dissemination and possible reuse by the THATCamp community.

So here they are, THATCamp’s three groundrules:

  1. THATCamp is FUN – That means no reading papers, no PowerPoint presentations, no extended project demos, and especially no grandstanding.
  2. THATCamp is PRODUCTIVE – Following from the no-papers rule, we’re not here to listen and be listened to. We’re here to work, to participate actively. It is our sincere hope that you use today to solve a problem, start a new project, reinvigorate an old one, write some code, write a blog post, cure your writer’s block, forge a new collaboration, or whatever else counts as real results by your definition. We are here to get stuff done.
  3. Most of all, THATCamp is COLLEGIAL – Everyone should feel equally free to participate and everyone should let everyone else feel equally free to participate. You are not students and professors, management and staff here at THATCamp. At most conferences, the game we play is one in which I, the speaker, try desperately to prove to you how smart I am, and you, the audience member, try desperately in the question and answer period to show how stupid I am by comparison. Not here. At THATCamp we’re here to be supportive of one another as we all struggle with the challenges and opportunities of incorporating technology into our work, departments, disciplines, and humanist missions. So no nitpicking, no tweckling, no petty BS.

Where's the Beef? Does Digital Humanities Have to Answer Questions?

The criticism most frequently leveled at digital humanities is what I like to call the “Where’s the beef?” question, that is, what questions does digital humanities answer that can’t be answered without it? What humanities arguments does digital humanities make?

Concern over the apparent lack of argument in digital humanities comes not only from outside our young discipline. Many practicing digital humanists are concerned about it as well. Rob Nelson of the University of Richmond’s Digital Scholarship Lab, an accomplished digital humanist, recently ruminated in his THATCamp session proposal, “While there have been some projects that have been developed to present arguments, they are few, and for the most part I sense that they haven’t had a substantial impact among academics, at least in the field of history.” A recent post on the Humanist listserv expresses one digital humanist’s “dream” of “a way of interpreting with computing that would allow arguments, real arguments, to be conducted at the micro-level and their consequences made in effect instantly visible at the macro-level.”

These concerns are justified. Does digital humanities have to help answer questions and make arguments? Yes. Of course. That’s what the humanities are all about. Is it answering lots of questions currently? Probably not. Hence the worry.

But this suggests another, more difficult, more nuanced question: When? When does digital humanities have to produce new arguments? Does it have to produce new arguments now? Does it have to answer questions yet?

In 1703 the great instrument maker, mathematician, and experimenter Robert Hooke died, vacating the suggestively named position he had occupied for more than forty years: Curator of Experiments to the Royal Society. In this role, it was Hooke’s job to prepare public demonstrations of scientific phenomena for the Fellows’ meetings. Among Hooke’s standbys in these scientific performances were animal dissections, demonstrations of the air pump (made famous by Robert Boyle but made by Hooke), and viewings of pre-prepared microscope slides. Part research, part ice breaker, and part theater, these performances served in part to entertain the wealthier Fellows of the Society, many of whom were chosen for election more for their patronage than their scientific achievements.

[Image: Hauksbee’s electrical machine]

Upon Hooke’s death the position of Curator of Experiments passed to Francis Hauksbee, who continued Hooke’s program of public demonstrations. Many of Hauksbee’s demonstrations involved the “electrical machine,” essentially an evacuated glass globe that was turned on an axle and to which friction (a hand, a cloth, a piece of fur) was applied to produce a static electrical charge. The machine had been invented some years earlier, but Hauksbee greatly improved it to produce ever greater charges. Perhaps his most important improvement was the addition to the globe of a small amount of mercury, which produced a glow when the machine was fired up. In an age of candlelight and on a continent of long, dark winters, the creation of a new source of artificial light was sensational, and the machine became a popular learned entertainment, not only in meetings of early scientific societies but in aristocratic parlors across Europe. Hauksbee’s machine also set off an explosion of electrical instrument making, experimentation, and descriptive work in the first half of the 18th century by the likes of Stephen Gray, John Desaguliers, and Pieter van Musschenbroek.

And yet not until later in the 18th century and early in the 19th century did Franklin, Coulomb, Volta, and ultimately Faraday provide adequate theoretical and mathematical answers to the questions of electricity raised by the electrical machine and the phenomena it produced. Only after decades of tool building, experimentation, and description were the tools sufficiently articulated and phenomena sufficiently described for theoretical arguments to be fruitfully made.*

There’s a moral to this story. One of the things digital humanities shares with the sciences is a heavy reliance on instruments, on tools. Sometimes new tools are built to answer pre-existing questions. Sometimes, as in the case of Hauksbee’s electrical machine, new questions and answers are the byproduct of the creation of new tools. Sometimes it takes a while, and in the meantime the tools themselves and the whiz-bang effects they produce must be the focus of scholarly attention.

Eventually digital humanities must make arguments. It has to answer questions. But yet? Like 18th century natural philosophers confronted with a deluge of strange new tools like microscopes, air pumps, and electrical machines, maybe we need time to articulate our digital apparatus, to produce new phenomena that we can neither anticipate nor explain immediately. At the very least, we need to make room for both kinds of digital humanities, the kind that seeks to make arguments and answer questions now and the kind that builds tools and resources with questions in mind, but only in the back of its mind and only for later. We need time to experiment and even—as we discussed recently with Bill Turkel and Kevin Kee on Digital Campus—time to play.

The 18th century electrical machine was a parlor trick. Until it wasn’t.

* For more on Hooke, see J.A. Bennett, et al., London’s Leonardo: The Life and Work of Robert Hooke (Oxford, 2003). For Hauksbee and the electrical machine, see W.D. Hackmann, Electricity from Glass: The History of the Frictional Electrical Machine, 1600-1850 (Alphen aan den Rijn, 1978) and Terje Brundtland, “From Medicine to Natural Philosophy: Francis Hauksbee’s Way to the Air-Pump,” The British Journal for the History of Science (June 2008), pp. 209-240. For 18th-century electricity in general, J.L. Heilbron, Electricity in the 17th and 18th Centuries: A Study of Early Modern Physics (Berkeley, 1979) is still the standard. Image of Hauksbee’s electrical machine via Wikimedia Commons.

Rethinking Access

[This week and next I’ll be facilitating the discussion of “Learning & Information” at the IMLS UpNext: Future of Museums and Libraries wiki. The following is adapted from the first open thread. Please leave any comments at UpNext to join in the wider discussion!]

In addition to the questions posted on the main page for this theme—I will be starting threads for each of those over the course of the next two weeks—something that has been on my mind lately is the question, “What is access?”

Over the past ten or fifteen years, libraries and museums have made great strides in putting collections online. That is an achievement in itself. But beyond a good search and usable interfaces, what responsibilities do museums and libraries have to their online visitors to contextualize those materials, to interpret them, to scaffold them appropriately for scholarly, classroom, and general use?

My personal feeling is that our definition of what constitutes “access” has been too narrow, that real access has to mean more than the broad availability of digitized collections. Rather, in my vision, true access to library and museum resources must include access to the expertise and expert knowledge that undergirds and defines our collections. This is not to say that museum and library websites don’t provide that broader kind of access; they often do. It’s just to say that the two functions are usually performed separately: first comes database access to collections material, then comes (sometimes yes, sometimes no, often depending on available funding) contextual and interpretive access.

What I’d like to see in the future—funders take note!—is a more inclusive definition of access that incorporates both things (what I’m calling database access and contextual access) from the beginning. So, in my brave new world, as a matter of course, every “access” project funded by agencies like IMLS would include support both for mounting collections online and for interpretive exhibits and other contextual and teaching resources. In this future, funding access equals funding interpretation and education.

Is this already happening? If so, how are museums and libraries treating access more broadly? If not, what problems do you see with my vision?

[Please leave comments at UpNext.]

"Soft" [money] is not a four-letter word

I will be the first to say that I have been, and continue to be, extremely lucky. As I explained in an earlier post, I have managed to strike a workable employment model somewhere between tenured professor and transient post-doc, expendable adjunct, or subservient staffer, a more or less happy “third way” that provides relative security, creative opportunity, and professional respect. The terms of my employment at the Center for History and New Media (CHNM) may not be reproducible everywhere. Nor do I see my situation as any kind of silver bullet. But it is one model that has seemed to work in a particular institutional and research context, and I offer it mainly to show that fairness doesn’t necessarily come in the form of tenure and that other models are possible.

Taking this a step further, I would also argue that fairness does not necessarily come in the form of what we in the educational and cultural sectors tend to call “hard money,” i.e., positions that are written into our institutions’ annual budgets.

Of course, the first thing to admit about “hard money” is that it doesn’t really exist. As we have seen in the recent financial crisis, especially in layoffs of tenure-track and even tenured faculty and in the elimination of boat-loads of hard lines in library and museum budgets, hard money is only hard until someone higher up than a department chair, dean, or provost decides that it’s soft.

The second thing to acknowledge is that the concept of “hard” versus “soft” money really only exists in academe. If those terms were extended to the rest of the U.S. economy—the 90+ percent of the U.S. labor force not employed by institutions of higher education (although government may be another place where this distinction is meaningful)—we’d see that most people are on “soft” money. My wife has been employed as a lawyer at a fancy “K Street” law firm in Washington, DC for going on six years now. She makes a very good living and is, by the standards of her chosen profession, very successful. And yet, you guessed it, she is on soft money. If for some reason the firm loses two, three, or four of its large clients, her billing and hence the money to pay her salary will very quickly dry up, and the powers that be will be forced to eliminate her position. This is true for almost any job you can point to. If revenues do not match projections, layoffs occur. One can debate the justice of particular layoffs and downsizings, but without wholesale changes to our economy, the basic rule of “no money in, no money out” is hard to deny.

Indulge me for a moment in a bit of simile. In some ways, CHNM is very much like any other business. At CHNM we have clients. Those clients are our funders. We sell products and services to those clients. Those products and services are called digital humanities projects. Our funder clients pay us a negotiated price for those products and services. We use those revenues to pay the employees who produce the products and services for our clients. To keep the wheels turning, we sell more products and services to our clients, and if an existing client doesn’t want or need what we’re selling anymore, we either find new clients or change the range of products and services we offer. Failing that, we will have to start reducing payroll.

How is this situation any different from or worse than that of any other sector of the economy? If people stop buying Word and Excel, Microsoft will have to find something else to sell or lay off the engineers, designers, project managers, and other staff who make MS Office.

I understand that so crass an analogy to corporate America will make many people unhappy. The idealist in me recoils from the notion that the academy should be treated as just another business. Yet the pragmatist in me—a side that is certainly stronger than it would otherwise be from dealing for so long with the often very practical, hands-on work of digital humanities and the frequent sleepless nights that come with the responsibility of managing a budget that supports nearly fifty employees—thinks it foolish to reject out of hand employment models that, however imperfect, have worked to produce so much and provide livelihoods for so many. (Indeed, the democrat in me also has to ask, what makes us in academe so special as to deserve and expect freedoms, security, and privileges that the rest of the labor force doesn’t?)

Therefore, “soft money” isn’t necessarily and always bad. If it funds good, relatively secure, fairly compensated jobs, then in my book soft money is OK. CHNM has several senior positions funded entirely on soft money and several employees who have been with us on soft money for five, six, and seven years—a long time in the short history of digital humanities.

What isn’t OK is when “soft” equals “temporary” or “term.” This, I readily acknowledge, is an all too frequent equation. Many, if not most, soft money post-doc, research faculty, and staff positions are created upon the award of a particular grant to work on that grant and that grant alone, and only until the term of the grant expires. I make no bones about the fact that these defined-term, grant-specific jobs are inferior to tenured, tenure-track, or even corporate-sector employment.

At CHNM we try to avoid creating these kinds of jobs. Since at least 2004, instead of hiring post-docs or temporary staff to work on a particular grant-funded project when it is awarded, we try where possible to hire people to fill a set of generalized roles that have evolved over the years and proven themselves necessary to the successful completion of nearly any digital humanities project: designer, web developer, project manager, outreach specialist. Generally our people are not paid from one grant, but rather from many grants. At any given moment, a CHNM web designer, for example, may be paid from as many as four or five different grant budgets, her funding distribution changing fairly frequently as her work on a particular project ends and work on another project begins. This makes for very complicated accounting and lots of strategic human resource decisions (this is one of the big headaches of my job), but it means that we can keep people around as projects start and end and funders come and go. Indeed, as the funding mosaic becomes ever more complex, when viewed from a distance (i.e., by anyone but me and a few other administrative staff who deal with the daily nitty-gritty) the budget picture begins to look very much like a general fund and staff positions begin to look like budget lines.
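
For readers who like to see the moving parts, here is a minimal, purely hypothetical sketch of the bookkeeping behind that funding mosaic. The salary figure, grant names, and allocation percentages are all invented for illustration; the only point is that the position persists while the mix of grants behind it shifts from month to month.

```python
# Hypothetical sketch of the "funding mosaic" described above. The salary,
# grant names, and allocation percentages are all invented for illustration.

MONTHLY_SALARY = 5000.00  # one staff member's monthly pay, in dollars

# How that one salary is split across active grants, month by month.
# The mix shifts as projects start and end, but the position itself persists.
allocations = {
    "2009-01": {"Grant A": 0.50, "Grant B": 0.30, "Grant C": 0.20},
    "2009-02": {"Grant A": 0.25, "Grant B": 0.30, "Grant C": 0.20, "Grant D": 0.25},
    "2009-03": {"Grant B": 0.40, "Grant C": 0.20, "Grant D": 0.40},
}

for month, mix in allocations.items():
    # Each month's shares must cover the full salary, or the books don't balance.
    assert abs(sum(mix.values()) - 1.0) < 1e-9
    charges = {grant: round(MONTHLY_SALARY * share, 2) for grant, share in mix.items()}
    print(month, charges)
```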

Perceptive readers will by now be asking, “Yes, but how did CHNM get to the point where it had enough grants and had diversified its funding enough to maintain what amounts to a permanent staff?” and I’ll readily admit there is a chicken-and-egg problem here. But how CHNM got to where it is today is a topic for another day. The point I’d like to make today is simply that—if we can get beyond thinking in terms of individual projects—soft money isn’t inherently bad for either the people funded by it or the institution that relies on it. On the contrary, it can be harnessed toward the sustainable maintenance of an agile, innovation-centered organization. While the pressure of constantly finding funding can be stressful and a drag, it doesn’t have to mean bad jobs and a crippled institution.

Just the opposite, in fact. Not only does CHNM’s diversified soft money offer its people some relative security in their employment; pooling our diversified grant resources to create staff stability also makes it easier for us to bring in additional revenue. Having people in generalized roles already on our payroll allows us to respond with confidence and speed as new funding opportunities present themselves. That is, our financial structure has enabled us to build the institutional capacity to take advantage of new funding sources, to be confident that we can do the work in question, to convince funders that this is so, and in turn to continue to maintain staff positions and further increase capacity.

CHNM is by no means perfect. Not all jobs at CHNM are created equal, and like everyone in the digital humanities we struggle to make ends meet and keep the engine going. In a time of increasingly intense competition for fewer and fewer grant dollars, there is always a distinct chance that we’ll run out of gas. Nevertheless, it is soft money that so far has created a virtuous and, dare I say, sustainable cycle.

Thus, when we talk about soft money, we have to talk about what kind of soft money and how it is structured and spent within an institution. Is it structured to hire short term post-docs and temporary staff who will be let go at the end of the grant? Or is it structured and diversified in such a way as to provide good, relatively stable jobs where staff can build skills and reputation over a period of several years?

When soft money means temporary and insecure, soft money is bad. When soft money facilitates the creation of good jobs in digital humanities, in my book at least, soft money is OK.

[Note: This post is part of a draft of a longer article that will appear in a forthcoming collection to be edited by Bethany Nowviskie on alternative careers for humanities scholars.]

[Image credits: Denni Schnapp, identity chris is.]

3 Innovation Killers in Digital Humanities

Here’s a list of three questions one might overhear in a peer review panel for digital humanities funding, each of which can stop a project dead in its tracks:

  • Haven’t X, Y, and Z already done this? We shouldn’t be supporting duplication of effort.
  • Are all of the stakeholders on board? (Hat tip to @patrickgmj for this gem.)
  • What about sustainability?

In its proper place, each of these is a valid criticism. But they shouldn’t be levied reflexively. Sometimes X, Y, and Z’s project stinks, or nobody uses it, or their code is lousy. Sometimes stakeholders can’t see through the fog of current practice and imagine the possible fruits of innovation. Sometimes experimental projects can’t be sustained. Sometimes they fail altogether.

If we are going to advance a field as young as digital humanities, if we are going to encourage innovation, if we are going to raise the bar, we sometimes have to be ready to accept “I don’t know, this is an experiment” as a valid answer to the sustainability question in our grant guidelines. We are sometimes going to have to accept duplication of effort (aren’t we glad someone kept experimenting with email and that the 1997 version of Hotmail wasn’t the first and last word in webmail?). And true innovation won’t always garner broad support among stakeholders, especially at the outset.

Duplication of effort, stakeholder buy-in, and sustainability are all important issues, but they’re not all-important. Innovation requires flexibility, an acceptance of risk, and a measure of trust. As Dorothea Salo said on Twitter, when considering sustainability, for example, we should be asking “‘how do we make this sustainable?’ rather than ‘kill it ‘cos we don’t know that it is.’” As Rachel Frick said in the same thread, in the case of experimental work we must accept that sustainability can “mean many things,” for example “document[ing] the risky action and results in an enduring way so that others may learn.”

Innovation makes some scary demands. Dorothea and Rachel offer some thoughts on how to balance those demands against the other, legitimate demands of grant funding. We’re going to need some more creative thinking if we’re going to push the field forward.

Late update (10/16/09): Hugh Cayless at Scriptio Continua makes the very good, very practical point that “if you’re writing a proposal, assume these objections will be thrown at it, and do some prior thinking so you can spike them before they kill your innovative idea.” An ounce of prevention is worth a pound of cure … or something like that.

Thinking the Unthinkable

Clay Shirky’s widely circulated post, Newspapers and Thinking the Unthinkable, has got me thinking about the “unthinkable” in humanities scholarship. According to Shirky, in the world of print journalism, the unthinkable was the realization that newspapers would not be able to transfer their scarcity-of-information-based business model to the internet. It was publishers’ inability to imagine a business model for a world in which information is easily distributed that led to the crisis in which newspapers find themselves today. He writes,

The unthinkable scenario unfolded something like this: The ability to share content wouldn’t shrink, it would grow. Walled gardens would prove unpopular. Digital advertising would reduce inefficiencies, and therefore profits. Dislike of micropayments would prevent widespread use. People would resist being educated to act against their own desires. Old habits of advertisers and readers would not transfer online. Even ferocious litigation would be inadequate to constrain massive, sustained law-breaking. (Prohibition redux.) Hardware and software vendors would not regard copyright holders as allies, nor would they regard customers as enemies. DRM’s requirement that the attacker be allowed to decode the content would be an insuperable flaw. And, per Thompson, suing people who love something so much they want to share it would piss them off.

In our world, easy parallels to newspaper publishers can be made, for instance, with journal publishers or the purveyors of subscription research databases (indeed the three are often one and the same). I’m sure you can point to lots of others, and I’d be very happy to hear them in comments. But what interests me most in Shirky’s piece are his ideas about how the advent of the unthinkable divides a community of practitioners. These comments hit a little closer to home. Shirky writes,

Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.

When reality is labeled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en masse. This shunting aside of the realists in favor of the fabulists has different effects on different industries at different times. One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.

Again, we can probably point pretty easily to both “realists” (who get it) and “fabulists” (who don’t or won’t) in academic publishing. But the analogy extends deeper than that. There are strong and uncomfortable parallels within our own disciplines.

The question is this: Just who are the pragmatists and who are the radicals in our departments? Maybe those of us who spend our time taking digital technologies seriously aren’t radical at all. Maybe those of us in digital humanities centers (read: “Innovation Departments”) are simply realists, while our more traditional colleagues are fabulists, faithfully clinging to ways of doing things that are already past. Listening to some colleagues talk about the dangers of Wikipedia, for instance, or the primacy of university-press-published, single-authored monographs, or problems of authority in the social tagging of collections, it certainly sometimes feels that way. Conversely what we do in digital humanities surely feels pragmatic, both day-to-day and in our broader focus on method.

Obviously we can’t and shouldn’t divide scholars so neatly into two camps. Nor do I think we should so casually dismiss traditional scholarship any more than we should uncritically celebrate the digital. Yet it’s worth thinking for a minute of ourselves as realists rather than revolutionaries. If nothing else, it may keep us focused on the work at hand.

Brand Name Scholar

Scholars may not like it, but that doesn’t change the fact that in the 21st century’s fragmented media environment, marketing and branding are key to disseminating the knowledge and tools we produce. This is especially true in the field of digital humanities, where we are competing for attention not only with other humanists and other cultural institutions, but also with titans of the blogosphere and big-time technology firms. Indeed, CHNM spends quite a bit of energy on branding—logo design, search engine optimization, cool SWAG, blogs like this one—something we view as central to our success and our mission: to get history into as many hands as possible. (CHNM’s actual mission statement reads, “Since 1994 under the founding direction of Roy Rosenzweig, CHNM has used digital media and computer technology to democratize history—to incorporate multiple voices, reach diverse audiences, and encourage popular participation in presenting and preserving the past.”)

In my experience, branding is mostly a game learned by trial and error, which is the only way to really understand what works for your target audience. But business school types also have some worthwhile advice. One good place to start is a two-part series on “personal branding” from Mashable, which provides some easy advice for building a brand for yourself or your projects. Another very valuable resource, posted just yesterday, is the Mozilla Community Marketing Guide. In it, the team that carved out a 20% market share from Microsoft for the open source web browser Firefox provides invaluable guidance on branding, giving public presentations, using social networking, finding sponsorships, and dealing with the media, much of which transfers readily to marketing digital humanities and cultural heritage projects.

It may not be pretty, but in an internet of more than one trillion pages, helping your work stand out is no sin.

(Note: I’ll be leading a lunchtime discussion of these and other issues relating to electronic marketing and outreach for cultural heritage projects later today at the IMLS WebWise conference in Washington, D.C. I’ll be using #webwise on Twitter if you’d like to follow my updates from the conference.)

Making It Count: Toward a Third Way

Over the summer there was much discussion among my colleagues about making digital humanities work “count” in academic careers. This included two fantastic threads on Mills Kelly’s Edwired blog, a great post by Cathy Davidson, and an informal chat on our own Digital Campus podcast. As usual, the topic of tenure also undergirded discussions at the various digital humanities workshops and conferences I attended during June, July, and August. The cooler weather and tempers of autumn having arrived, I’d like to take a quick look back and commit to writing some of the thoughts I offered on our podcast and at these meetings.

Let me use Mills’ “Making Digital Scholarship Count” series as a starting point. For those of you who weren’t following his posts, Mills argues that if scholars want digital scholarship to count in traditional promotion and tenure decisions, then they have to make sure it conforms to the characteristics and standards of traditional scholarship (though Mills points out that some of those standards, such as peer review, will have to be modified slightly to accommodate the differences inherent in digital scholarship). At the same time, Mills suggests that we have to accept that digital work that does not fit the standards of traditional scholarship, no matter how useful or well done, will not count in traditional promotion and tenure decisions. Essentially Mills makes a distinction between digital “scholarship” and other kinds of digital “work,” the first of which bears the characteristics of traditional scholarship and the second of which does not. The first should count as “scholarship” in promotion and tenure decisions. The second should not. Rather, it should count as “service” or something similar.

I more or less agree with this, and I’m fine with Mills’ distinction. Communities have the right to set their own standards and decide what counts as this or that. But this situation does raise questions for those of us engaged primarily in the second kind of activity, in digital humanities “work.” What happens to the increasing numbers of people employed inside university departments doing “work,” not “scholarship”? In universities that have committed to digital humanities, shouldn’t the work of creating and maintaining digital collections, building software, experimenting with new user interface designs, mounting online exhibitions, providing digital resources for students and teachers, and managing the institutional teams upon which all digital humanities depend count for more than service does under traditional P&T rubrics? Personally I’m not willing to admit that this other kind of digital work is any less important for digital humanities than digital scholarship, which frankly would not be possible without it. All digital humanities is collaborative, and it’s not OK if the only people whose careers benefit from our collaborations are the “scholars” among us. We need the necessary “work” of digital humanities to count for the people whose jobs are to do it.

Now I’m not arguing that we bestow tenure in the history department for web design or project management, even if it’s done by people with PhDs. What I am saying is that if we’re going to do digital humanities in our departments, then we need something new. It can’t be tenure-track or nothing. With the emergence of the new digital humanities, we need some new employment models.

I myself do relatively little work that would fit traditional definitions of scholarship. Practically none of my digital work would. Because of that I am more than willing to accept that tenure just isn’t in the picture for me. With my digital bent I am asking for a change in the nature of academic work, and therefore I have to be willing to accept a change in the nature and terms of my academic employment.

That said, I am not willing to accept the second-class status of, for instance, an adjunct faculty member. My work—whether it is “scholarship” or not—wins awards, attracts hundreds of thousands of dollars in grant funding, turns up periodically on CNN and in the New York Times, enables the work of hundreds of other academics, and is used every day by thousands of people, scholars and non-scholars alike. That may not make it tenureable, but it’s certainly not second class. My work requires a “third way.”

Fortunately I’m at an institution committed to digital humanities and willing to experiment with new models of academic employment. Technically I have two titles, “Managing Director of the Center for History & New Media” and “Research Assistant Professor.” That puts me somewhere between an untenured administrative faculty member and an untenured research faculty member. It is a position which would frighten some of my tenure-track colleagues terribly, and I can, indeed, be fired from my job. Sometimes that worries me too. Then I remember that probably 99% of the rest of working Americans can also be fired from their jobs. I also remember that just like that other 99%, if I do what’s expected of me, it probably won’t happen. If I continue to win grants and awards from panels of my peers and continue to produce quality, well-received, well-used digital humanities products, I’ll probably continue to have a job. If I exceed expectations, I’ll probably advance.

Just as important to note are the benefits my job has over more traditional scholarly career paths, some of which are pretty serious. I’m not terrorized by the formalized expectations that accompany traditional P&T decisions. I won’t perish if I don’t publish. I also don’t have fixed teaching obligations. I can focus full-time on my research, and I have greater freedom and flexibility to explore new directions than most of my tenure-track colleagues. I get to work on lots of things at once. Some of these experiments are likely to fail, but as long as most succeed, that’s expected and OK. I manage my own travel budgets and research schedule rather than being held hostage to department committees. I get to work every day with a close-knit team of like-minded academics rather than alone in a library. I have considerably greater freedom to negotiate my pay and benefits. And to the extent that it advances the mission and interests of the Center for History & New Media, this blog “counts.”

Mine is not a tenure-track position, and based on the work I do, I don’t expect it to be. Nor do I care. There are some downsides and some upsides to my position, but it’s a reasonably happy third way. More importantly, I believe it is a necessary third way for the digital humanities, which in Mills’ terms require not only digital “scholarship” but also digital “work.” I’m lucky to be at an institution and to have colleagues that make this third way possible. Other institutions looking to build digital humanities capacity should follow suit. If digital humanities are going to flourish in the academy, we need both to accept and advocate for new models of academic employment.

[Image credit: Dave Morris]

Late Update (10/2/08): I very absentmindedly neglected to list my friend Margie McLellan among the important voices in this discussion. Along with Mills and Cathy Davidson, Margie’s three posts, On Defining Scholarship, Scholarship Update, and Is a Blog Scholarship?, are required reading on these matters.