Rethinking Access

[This week and next I’ll be facilitating the discussion of “Learning & Information” at the IMLS UpNext: Future of Museums and Libraries wiki. The following is adapted from the first open thread. Please leave any comments at UpNext to join in the wider discussion!]

In addition to the questions posted on the main page for this theme—I will be starting threads for each of those over the course of the next two weeks—something that has been on my mind lately is the question, “What is access?”

Over the past ten or fifteen years, libraries and museums have made great strides in putting collections online. That is an achievement in itself. But beyond a good search and usable interfaces, what responsibilities do museums and libraries have to their online visitors to contextualize those materials, to interpret them, to scaffold them appropriately for scholarly, classroom, and general use?

My personal feeling is that our definition of what constitutes “access” has been too narrow, that real access has to mean more than the broad availability of digitized collections. Rather, in my vision, true access to library and museum resources must include access to the expertise and expert knowledge that undergirds and defines our collections. This is not to say that museum and library websites don’t provide that broader kind of access; they often do. It’s just to say that the two functions are usually performed separately: first comes database access to collections material, then comes (sometimes yes, sometimes no, often depending on available funding) contextual and interpretive access.

What I’d like to see in the future—funders take note!—is a more inclusive definition of access that incorporates both things (what I’m calling database access and contextual access) from the beginning. So, in my brave new world, as a matter of course, every “access” project funded by agencies like IMLS would include support both for mounting collections online and for interpretive exhibits and other contextual and teaching resources. In this future, funding access equals funding interpretation and education.

Is this already happening? If so, how are museums and libraries treating access more broadly? If not, what problems do you see with my vision?

[Please leave comments at UpNext.]

"Soft" [money] is not a four-letter word

I will be the first to say that I have been, and continue to be, extremely lucky. As I explained in an earlier post, I have managed to strike a workable employment model somewhere between tenured professor and transient post-doc, expendable adjunct, or subservient staffer, a more or less happy “third way” that provides relative security, creative opportunity, and professional respect. The terms of my employment at the Center for History and New Media (CHNM) may not be reproducible everywhere. Nor do I see my situation as any kind of silver bullet. But it is one model that has seemed to work in a particular institutional and research context, and I offer it mainly to show that fairness doesn’t necessarily come in the form of tenure and that other models are possible.

Taking this further, I would also argue that fairness does not necessarily come in the form of what we in the educational and cultural sectors tend to call "hard money," i.e. positions that are written into our institutions' annual budgets.

Of course, the first thing to admit about “hard money” is that it doesn’t really exist. As we have seen in the recent financial crisis, especially in layoffs of tenure-track and even tenured faculty and in the elimination of boat-loads of hard lines in library and museum budgets, hard money is only hard until someone higher up than a department chair, dean, or provost decides that it’s soft.

The second thing to acknowledge is that the concept of "hard" versus "soft" money really only exists in academe. If those terms were extended to the rest of the U.S. economy—the 90+ percent of the U.S. labor force not employed by institutions of higher education (although government may be another place where this distinction is meaningful)—we'd see that most people are on "soft" money. My wife has been employed as a lawyer at a fancy "K Street" law firm in Washington, DC for going on six years now. She makes a very good living and is, by the standards of her chosen profession, very successful. And yet, you guessed it, she is on soft money. If for some reason the firm loses two, three, or four of its large clients, her billing and hence the money to pay her salary will very quickly dry up, and the powers that be will be forced to eliminate her position. This is true for almost any job you can point to. If revenues do not match projections, layoffs occur. One can debate the justice of particular layoffs and downsizings, but without wholesale changes to our economy, the basic rule of "no money in, no money out" is hard to deny.

Indulge me for a moment in a bit of simile. In some ways, CHNM is very much like any other business. At CHNM we have clients. Those clients are our funders. We sell products and services to those clients. Those products and services are called digital humanities projects. Our funder clients pay us a negotiated price for those products and services. We use those revenues to pay the employees who produce the products and services for our clients. To keep the wheels turning, we sell more products and services to our clients, and if an existing client doesn’t want or need what we’re selling anymore, we either find new clients or change the range of products and services we offer. Failing that, we will have to start reducing payroll.

How is this situation any different from or worse than the situation in any other sector of the economy? If people stop buying Word and Excel, Microsoft will have to find something else to sell people or lay off the engineers, designers, project managers, and other staff who make MS Office.

I understand that so crass an analogy to corporate America will make many people unhappy. The idealist in me recoils from the notion that the academy should be treated as just another business. Yet the pragmatist in me—a side that is certainly stronger than it would otherwise be from dealing for so long with the often very practical, hands-on work of digital humanities and the frequent sleepless nights that come with the responsibility of managing a budget that supports nearly fifty employees—thinks it foolish to reject out of hand employment models that, however imperfect, have worked to produce so much and provide livelihoods for so many. (Indeed, the democrat in me also has to ask, what makes us in academe so special as to deserve and expect freedoms, security, and privileges that the rest of the labor force doesn’t?)

Therefore, "soft money" isn't necessarily bad. If it funds good, relatively secure, fairly compensated jobs, then in my book soft money is OK. CHNM has several senior positions funded entirely on soft money and several employees who have been with us on soft money for five, six, and seven years—a long time in the short history of digital humanities.

What isn’t OK is when "soft" equals "temporary" or "term." This, I readily acknowledge, is an all too frequent equation. Many, if not most, soft money post-doc, research faculty, and staff positions are created upon the award of a particular grant to work on that grant and that grant alone, and only until the term of the grant expires. I make no bones about the fact that these defined-term, grant-specific jobs are inferior to tenured, tenure-track, or even corporate-sector employment.

At CHNM we try to avoid creating these kinds of jobs. Since at least 2004, instead of hiring post-docs or temporary staff to work on a particular grant-funded project when it is awarded, where possible we try to hire people to fill a set of generalized roles that have evolved over the years and proven themselves necessary to the successful completion of nearly any digital humanities project: designer, web developer, project manager, outreach specialist. Generally our people are not paid from one grant, but rather from many grants. At any given moment, a CHNM web designer, for example, may be paid from as many as four or five different grant budgets, her funding distribution changing fairly frequently as her work on a particular project ends and work on another project begins. This makes for very complicated accounting and lots of strategic human resource decisions (this is one of the big headaches of my job), but it means that we can keep people around as projects start and end and funders come and go. Indeed, as the funding mosaic becomes ever more complex, when viewed from a distance (i.e. by anyone but me and a few other administrative staff who deal with the daily nitty-gritty) the budget picture begins to look very much like a general fund and staff positions begin to look like budget lines.
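
To make the mosaic a little more concrete, here is a minimal, hypothetical sketch of how such a distribution might be tracked. The roles, grants, salaries, and percentages are invented purely for illustration and do not describe CHNM's actual budgets; the point is only that many partial grant allocations, each adding up to one fully funded position, aggregate into something that behaves like a general fund.

```python
# Hypothetical sketch of a "funding mosaic": staff salaries split across grants.
# All names and figures are invented for illustration only.

from collections import defaultdict

# Each person's salary is divided across several grants; the split shifts
# as projects start and end.
allocations = {
    "web designer":        {"Grant A": 0.40, "Grant B": 0.25, "Grant C": 0.20, "Grant D": 0.15},
    "project manager":     {"Grant A": 0.50, "Grant C": 0.50},
    "outreach specialist": {"Grant B": 0.60, "Grant D": 0.40},
}

salaries = {"web designer": 60000, "project manager": 65000, "outreach specialist": 55000}

# Sanity check: every position must be fully funded (shares sum to 1.0).
for role, shares in allocations.items():
    assert abs(sum(shares.values()) - 1.0) < 1e-9, f"{role} is not fully funded"

# Viewed grant by grant, the picture is a tangle of partial salary lines ...
per_grant = defaultdict(float)
for role, shares in allocations.items():
    for grant, share in shares.items():
        per_grant[grant] += share * salaries[role]

for grant, cost in sorted(per_grant.items()):
    print(f"{grant}: ${cost:,.0f} in salary charges")

# ... but viewed from a distance, it behaves like a general fund supporting
# a stable set of staff positions.
print(f"Total personnel budget: ${sum(per_grant.values()):,.0f}")
```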

Perceptive readers will by now be asking, "Yes, but how did CHNM get to the point where it had enough grants and had diversified its funding enough to maintain what amounts to a permanent staff?" and I’ll readily admit there is a chicken-and-egg problem here. But how CHNM got to where it is today is a topic for another day. The point I’d like to make today is simply that—if we can get beyond thinking about project funding—soft money isn’t inherently bad for either the people funded by it or the institution that relies on it. On the contrary, it can be harnessed toward the sustainable maintenance of an agile, innovation-centered organization. While the pressure of constantly finding funding can be stressful and a drag, it doesn’t have to mean bad jobs and a crippled institution.

Just the opposite, in fact. Not only does CHNM’s diversified soft money offer its people some relative security in their employment; pooling our diversified grant resources to create staff stability also makes it easier for us to bring in additional revenue. Having people in generalized roles already on our payroll allows us to respond with confidence and speed as new funding opportunities present themselves. That is, our financial structure has enabled us to build the institutional capacity to take advantage of new funding sources, to be confident that we can do the work in question, to convince funders that this is so, and in turn to continue to maintain staff positions and further increase capacity.

CHNM is by no means perfect. Not all jobs at CHNM are created equal, and like everyone in the digital humanities we struggle to make ends meet and keep the engine going. In a time of increasingly intense competition for fewer and fewer grant dollars, there is always a distinct chance that we’ll run out of gas. Nevertheless, it is soft money that so far has created a virtuous and, dare I say, sustainable cycle.

Thus, when we talk about soft money, we have to talk about what kind of soft money and how it is structured and spent within an institution. Is it structured to hire short-term post-docs and temporary staff who will be let go at the end of the grant? Or is it structured and diversified in such a way as to provide good, relatively stable jobs where staff can build skills and reputation over a period of several years?

When soft money means temporary and insecure, soft money is bad. When soft money facilitates the creation of good jobs in digital humanities, in my book at least, soft money is OK.

[Note: This post is part of a draft of a longer article that will appear in a forthcoming collection to be edited by Bethany Nowviskie on alternative careers for humanities scholars.]

[Image credits: Denni Schnapp, identity chris is.]

3 Innovation Killers in Digital Humanities

Here’s a list of three questions one might overhear in a peer review panel for digital humanities funding, each of which can stop a project dead in its tracks:

  • Haven’t X, Y, and Z already done this? We shouldn’t be supporting duplication of effort.
  • Are all of the stakeholders on board? (Hat tip to @patrickgmj for this gem.)
  • What about sustainability?

In their right place, each of these is a valid criticism. But they shouldn’t be leveled reflexively. Sometimes X, Y, and Z’s project stinks, or nobody uses it, or their code is lousy. Sometimes stakeholders can’t see through the fog of current practice and imagine the possible fruits of innovation. Sometimes experimental projects can’t be sustained. Sometimes they fail altogether.

If we are going to advance a field as young as digital humanities, if we are going to encourage innovation, if we are going to raise the bar, we sometimes have to be ready to accept “I don’t know, this is an experiment” as a valid answer to the sustainability question in our grant guidelines. We are sometimes going to have to accept duplication of effort (aren’t we glad someone kept experimenting with email and that the 1997 version of Hotmail wasn’t the first and last word in webmail?). And true innovation won’t always garner broad support among stakeholders, especially at the outset.

Duplication of effort, stakeholder buy-in, and sustainability are all important issues, but they’re not all-important. Innovation requires flexibility, an acceptance of risk, and a measure of trust. As Dorothea Salo said on Twitter, when considering sustainability, for example, we should be asking “‘how do we make this sustainable?’ rather than ‘kill it ‘cos we don’t know that it is.'” As Rachel Frick said in the same thread, in the case of experimental work we must accept that sustainability can “mean many things,” for example “document[ing] the risky action and results in an enduring way so that others may learn.”

Innovation makes some scary demands. Dorothea and Rachel offer some thoughts on how to balance those demands against the other, legitimate demands of grant funding. We’re going to need some more creative thinking if we’re going to push the field forward.

Late update (10/16/09): Hugh Cayless at Scriptio Continua makes the very good, very practical point that “if you’re writing a proposal, assume these objections will be thrown at it, and do some prior thinking so you can spike them before they kill your innovative idea.” An ounce of prevention is worth a pound of cure … or something like that.

Thinking the Unthinkable

Clay Shirky’s widely circulated post, Newspapers and Thinking the Unthinkable, has got me thinking about the “unthinkable” in humanities scholarship. According to Shirky, in the world of print journalism, the unthinkable was the realization that newspapers would not be able to transfer their scarcity-of-information-based business model to the internet. It was publishers’ inability to imagine a business model for a world in which information is easily distributed that led to the crisis in which newspapers find themselves today. He writes,

The unthinkable scenario unfolded something like this: The ability to share content wouldn’t shrink, it would grow. Walled gardens would prove unpopular. Digital advertising would reduce inefficiencies, and therefore profits. Dislike of micropayments would prevent widespread use. People would resist being educated to act against their own desires. Old habits of advertisers and readers would not transfer online. Even ferocious litigation would be inadequate to constrain massive, sustained law-breaking. (Prohibition redux.) Hardware and software vendors would not regard copyright holders as allies, nor would they regard customers as enemies. DRM’s requirement that the attacker be allowed to decode the content would be an insuperable flaw. And, per Thompson, suing people who love something so much they want to share it would piss them off.

In our world, easy parallels to newspaper publishers can be made, for instance, with journal publishers or the purveyors of subscription research databases (indeed the three are often one and the same). I’m sure you can point to lots of others, and I’d be very happy to hear them in comments. But what interests me most in Shirky’s piece are his ideas about how the advent of the unthinkable divides a community of practitioners. These comments hit a little closer to home. Shirky writes,

Revolutions create a curious inversion of perception. In ordinary times, people who do no more than describe the world around them are seen as pragmatists, while those who imagine fabulous alternative futures are viewed as radicals. The last couple of decades haven’t been ordinary, however. Inside the papers, the pragmatists were the ones simply looking out the window and noticing that the real world was increasingly resembling the unthinkable scenario. These people were treated as if they were barking mad. Meanwhile the people spinning visions of popular walled gardens and enthusiastic micropayment adoption, visions unsupported by reality, were regarded not as charlatans but saviors.

When reality is labeled unthinkable, it creates a kind of sickness in an industry. Leadership becomes faith-based, while employees who have the temerity to suggest that what seems to be happening is in fact happening are herded into Innovation Departments, where they can be ignored en masse. This shunting aside of the realists in favor of the fabulists has different effects on different industries at different times. One of the effects on the newspapers is that many of their most passionate defenders are unable, even now, to plan for a world in which the industry they knew is visibly going away.

Again, we can probably point pretty easily to both “realists” (who get it) and “fabulists” (who don’t or won’t) in academic publishing. But the analogy extends deeper than that. There are strong and uncomfortable parallels within our own disciplines.

The question is this: Just who are the pragmatists and who are the radicals in our departments? Maybe those of us who spend our time taking digital technologies seriously aren’t radical at all. Maybe those of us in digital humanities centers (read: “Innovation Departments”) are simply realists, while our more traditional colleagues are fabulists, faithfully clinging to ways of doing things that are already past. Listening to some colleagues talk about the dangers of Wikipedia, for instance, or the primacy of university-press-published, single-authored monographs, or problems of authority in the social tagging of collections, it certainly sometimes feels that way. Conversely what we do in digital humanities surely feels pragmatic, both day-to-day and in our broader focus on method.

Obviously we can’t and shouldn’t divide scholars so neatly into two camps. Nor do I think we should so casually dismiss traditional scholarship any more than we should uncritically celebrate the digital. Yet it’s worth thinking for a minute of ourselves as realists rather than revolutionaries. If nothing else, it may keep us focused on the work at hand.

Brand Name Scholar

Scholars may not like it, but that doesn’t change the fact that in the 21st century’s fragmented media environment, marketing and branding are key to disseminating the knowledge and tools we produce. This is especially true in the field of digital humanities, where we are competing for attention not only with other humanists and other cultural institutions, but also with titans of the blogosphere and big-time technology firms. Indeed, CHNM spends quite a bit of energy on branding—logo design, search engine optimization, cool SWAG, blogs like this one—something we view as central to our success and our mission: to get history into as many hands as possible. (CHNM’s actual mission statement reads, “Since 1994 under the founding direction of Roy Rosenzweig, CHNM has used digital media and computer technology to democratize history—to incorporate multiple voices, reach diverse audiences, and encourage popular participation in presenting and preserving the past.”)

In my experience, branding is mostly a game learned by trial and error, which is the only way to really understand what works for your target audience. But business school types also have some worthwhile advice. One good place to start is a two-part series on “personal branding” from Mashable, which provides some easy advice for building a brand for yourself or your projects. Another very valuable resource, posted just yesterday, is the Mozilla Community Marketing Guide. In it, the team that managed to carve out a 20% market share from Microsoft for the open source web browser Firefox provides invaluable guidance not only on branding, but also on giving public presentations, using social networking, finding sponsorships, and dealing with the media, all of which is widely transferable to marketing digital humanities and cultural heritage projects.

It may not be pretty, but in an internet of more than one trillion pages, helping your work stand out is no sin.

(Note: I’ll be leading a lunchtime discussion of these and other issues relating to electronic marketing and outreach for cultural heritage projects later today at the IMLS WebWise conference in Washington, D.C. I’ll be using #webwise on Twitter if you’d like to follow my updates from the conference.)

Making It Count: Toward a Third Way

Over the summer there was much discussion among my colleagues about making digital humanities work “count” in academic careers. This included two fantastic threads on Mills Kelly’s Edwired blog, a great post by Cathy Davidson, and an informal chat on our own Digital Campus podcast. As usual the topic of tenure also undergirded discussions at the various digital humanities workshops and conferences I attended during June, July, and August. The cooler weather and tempers of autumn having arrived, I’d like to take a quick look back and commit to writing some of the thoughts I offered on our podcast and at these meetings.

Let me use Mills’ “Making Digital Scholarship Count” series as a starting point. For those of you who weren’t following his posts, Mills argues that if scholars want digital scholarship to count in traditional promotion and tenure decisions, then they have to make sure it conforms to the characteristics and standards of traditional scholarship (though Mills points out that some of those standards, such as peer review, will have to be modified slightly to accommodate the differences inherent in digital scholarship). At the same time Mills suggests that we have to accept that digital work that does not fit the standards of traditional scholarship, no matter how useful or well done, will not count in traditional promotion and tenure decisions. Essentially Mills makes a distinction between digital “scholarship” and other kinds of digital “work,” the first of which bears the characteristics of traditional scholarship and the second of which does not. The first should count as “scholarship” in promotion and tenure decisions. The second should not. Rather it should count as “service” or something similar.

I more or less agree with this, and I’m fine with Mills’ distinction. Communities have the right to set their own standards and decide what counts as this or that. But this situation does raise questions for those of us engaged primarily in the second kind of activity, in digital humanities “work.” What happens to the increasing numbers of people employed inside university departments doing “work,” not “scholarship”? In universities that have committed to digital humanities, shouldn’t the work of creating and maintaining digital collections, building software, experimenting with new user interface designs, mounting online exhibitions, providing digital resources for students and teachers, and managing the institutional teams upon which all digital humanities depend count for more than service does under traditional P&T rubrics? Personally I’m not willing to admit that this other kind of digital work is any less important for digital humanities than digital scholarship, which frankly would not be possible without it. All digital humanities is collaborative, and it’s not OK if the only people whose careers benefit from our collaborations are the “scholars” among us. We need the necessary “work” of digital humanities to count for those people whose jobs are to do it.

Now I’m not arguing we bestow tenure in the history department for web design or project management, even if it’s done by people with PhDs. What I am saying is that if we’re going to do digital humanities in our departments, then we need something new. It can’t be tenure-track or nothing. With the emergence of the new digital humanities, we need some new employment models.

I myself do relatively little work that would fit traditional definitions of scholarship. Practically none of my digital work would. Because of that I am more than willing to accept that tenure just isn’t in the picture for me. With my digital bent I am asking for a change in the nature of academic work, and therefore I have to be willing to accept a change in the nature and terms of my academic employment.

That said, I am not willing to accept the second-class status of, for instance, an adjunct faculty member. My work—whether it is “scholarship” or not—wins awards, attracts hundreds of thousands of dollars in grant funding, turns up periodically on CNN and in the New York Times, enables the work of hundreds of other academics, and is used every day by thousands of people, scholars and non-scholars alike. That may not make it tenureable, but it’s certainly not second class. My work requires a “third way.”

Fortunately I’m at an institution committed to digital humanities and willing to experiment with new models of academic employment. Technically I have two titles, “Managing Director of the Center for History & New Media” and “Research Assistant Professor.” That puts me somewhere between an untenured administrative faculty member and an untenured research faculty member. It is a position which would frighten some of my tenure-track colleagues terribly, and I can, indeed, be fired from my job. Sometimes that worries me too. Then I remember that probably 99% of the rest of working Americans can also be fired from their jobs. I also remember that just like that other 99%, if I do what’s expected of me, it probably won’t happen. If I continue to win grants and awards from panels of my peers and continue to produce quality, well-received, well-used digital humanities products, I’ll probably continue to have a job. If I exceed expectations, I’ll probably advance.

Just as important to note are the benefits my job has over more traditional scholarly career paths, some of which are pretty serious. I’m not terrorized by the formalized expectations that accompany traditional P&T decisions. I won’t perish if I don’t publish. I also don’t have fixed teaching obligations. I can focus full-time on my research, and I have greater freedom and flexibility to explore new directions than most of my tenure-track colleagues. I get to work on lots of things at once. Some of these experiments are likely to fail, but as long as most succeed, that’s expected and OK. I manage my own travel budgets and research schedule rather than being held hostage to department committees. I get to work every day with a close-knit team of like-minded academics rather than alone in a library. I have considerably greater freedom to negotiate my pay and benefits. And to the extent that it advances the mission and interests of the Center for History & New Media, this blog “counts.”

Mine is not a tenure-track position, and based on the work I do, I don’t expect it to be. Nor do I care. There are some downsides and some upsides to my position, but it’s a reasonably happy third way. More importantly, I believe it is a necessary third way for the digital humanities, which in Mills’ terms require not only digital “scholarship” but also digital “work.” I’m lucky to be at an institution and to have colleagues that make this third way possible. Other institutions looking to build digital humanities capacity should follow suit. If digital humanities are going to flourish in the academy, we need both to accept and advocate for new models of academic employment.

[Image credit: Dave Morris]

Late Update (10/2/08): I very absentmindedly neglected to list my friend Margie McLellan among the important voices in this discussion. Along with Mills and Cathy Davidson, Margie’s three posts, On Defining Scholarship, Scholarship Update, and Is a Blog Scholarship?, are required reading on these matters.

Thoughts on THATCamp

Last week CHNM hosted the inaugural THATCamp, with what seemed to me like great success. Short for “The Humanities and Technology Camp,” THATCamp is a BarCamp-style, user-generated “unconference” on digital humanities. Structurally, it differs from an ordinary conference in two ways: first in that its sessions are organized by participants themselves (ahead of time through a blog, but mainly on the day of the conference) rather than by a program committee, and second in that everyone is expected to participate actively—to present a project, share some skill, and collaborate with fellow participants. We first started thinking about THATCamp as long as two or three years ago, and I was thrilled to see it finally get off the ground, thanks in large part to the extraordinary efforts and energy of Jeremy Boggs and Dave Lester, who will be presenting their own thoughts on the matter in a forthcoming episode of THATPodcast.

To begin with let me say the sessions were fantastic. I particularly benefited from conversations on F/OSS design and development processes, event standards, and sustainability. Nevertheless I have to admit I was just as interested in the process of THATCamp as I was in its products. Throughout the weekend I was paying as much attention to how THATCamp worked as to the work that was actually done there. I’d like to share three observations in this regard:

  • First and foremost, I think it is very important to stress that THATCamp was cheap. The cost of the weekend was around $3000. Total. That included a fairly lavish breakfast and lunch buffet on both days, lots of caffeinated drinks, t-shirts for everyone involved, pretty badges and lanyards, office supplies (post-its, pens), room fees, and a couple of student travel stipends. Those modest costs were paid through a combination of sponsorships (the GMU provost’s office, NiCHE, NYPL, and CHNM’s own Zotero project) and voluntary donations from THATCamp participants (we suggested $20 and passed a hat around on the first day). Most participants had to fund their own travel, but still.
  • Second, THATCamp was honest. Mills has already pointed out how the unconference sessions at THATCamp were so much more engaging than the standard “panelist reads at you” conference session model. That’s certainly true. But it wasn’t just the format that made these discussions more useful. It was the attitude. At most scholarly conferences, everyone seems to have something to prove—specifically, how smart they are. We have all seen people shouted down at conferences and how destructive that can be, especially to a young scholar (I have seen people in tears). But at THATCamp, instead of trying to out-smart each other, campers came clean about their failures as well as their successes, their problems as well as their solutions. By admitting, rather than covering up, gaps in their knowledge, campers were able to learn from each other. This honesty made THATCamp truly productive.
  • Third, THATCamp was democratic. In large part because Jeremy and Dave (both students as well as kickass digital humanists) did most of the work, but also because of the transparency, informality, and openness of the process and discussions, professional status didn’t seem to count for much at THATCamp. Full professors, associate professors, assistant professors, research faculty, museum and library professionals from big and small institutions at all levels, and graduate students seemed to mix easily and casually. More than once I saw a student or young professional challenge a more senior colleague. Even more often I saw the groups laughing, chatting, sharing ideas. That’s good for everybody.

I’m not going to lie. THATCamp was a ton of work, and it wasn’t perfect by any means. I’m not sure, for instance, how many publications will result from the sessions. But I do think it was a truly different and useful way of forging new collaborations, building a community of practice, making connections to people with answers to your questions, supporting student work and thought, and solving practical problems. The model is particularly appropriate for a very hands-on discipline like digital humanities, but the three observations above suggest it should and could easily be extended to other, more traditional disciplines. Mills has already called on the American Historical Association to dedicate 5% of its program to THATCamp-style activities, and Margie McLellan is hoping to encourage the Oral History Association to do the same. I’d also encourage humanities departments, graduate student committees, and other research institutions to try. We all lament the lack of community and collegiality in our profession and decry the cutthroat competitiveness in our fields. It seems to me that THATCamp is a cheap and easy antidote.

[Image: “Dork Shorts” session sign-up board, credit Dave Lester.]

Twitter, Downtime, and Radical Transparency

Listeners to the most recent episode of Digital Campus will know that I’m a fairly heavy user of Twitter, the weirdly addictive and hard-to-describe microblogging and messaging service. But anyone who uses the wildly popular service regularly will also know that the company’s service architecture has not scaled very well. During the last month or so, as hundreds of thousands have signed up and started “tweeting,” it has sometimes seemed like Twitter is down as often as it’s up.

Considering the volume and complexity of the information they’re serving, and how unexpected the service’s popularity was, I tend not to blame Twitter for its downtime. As a member of an organization that runs its own servers (with nowhere near the load of Twitter, mind you), I sympathize with Twitter’s situation. Keeping a server up is a relentless, frustrating, unpredictable, and scary task. Yet as a user of Twitter, I still get pretty annoyed when I can’t access my friends’ tweets or when one of mine disappears into the ether.

It’s clear, however, that Twitter is working very hard to rewrite its software and improve its network infrastructure. How do I know this? First, it seems like some of the problems are getting better. Second, and more important, for the last week or so, Twitter has been blogging its efforts. The Twitter main page now includes a prominent link to the Twitter Status blog, where managers and engineers post at least daily updates about the work they’re doing and the problems they’re facing. The blog also includes links to uptime statistics, developer forums, and other information sharing channels. Twitter’s main corporate blog, moreover, contains longer posts about these same issues, as well as notes on other uncomfortable matters such as users’ concerns about privacy under Twitter’s terms of service.

Often, an organization facing troubles—particularly troubles of its own making—does everything it can to hide the problem, its cause, and its efforts to fix it. Twitter has decided on a different course. Twitter seems to have realized that its very committed, very invested user base would prefer honesty and openness to obfuscation and spin. By definition, Twitter users are people who have put themselves out there on the web. Twitter’s managers and engineers have realized that those users expect nothing less of the company itself.

As a Twitter user, I find that the company’s openness about its difficulties has made me more patient, more willing to forgive the occasional outage or slowdown. There is a lesson in this for digital and public historians. Our audiences are similarly committed. We work very hard to make sure they feel like we’re all in this together. We should remember this when we have problems, such as our own network outages (CHNM is experiencing one right now, btw) and technical shortcomings.

We are open with our successes. We should be open with our problems as well. Our audiences and partners will reward us with their continued loyalty and (who knows?) maybe even help.

Sunset for Ideology, Sunrise for Methodology?

Sometimes friends in other disciplines ask me the question, “So, what are the big ideas in history these days?” I then proceed to fumble around for a few minutes trying to put my finger on some new “-ism” or competing “-isms” to describe and define today’s historical discourse. Invariably, I come up short.

Growing up in the second half of the 20th century, we are prone to think about our world and our work in terms of ideologies. Late 20th century historical discourse was dominated by a succession of ideas and theoretical frameworks. This mirrored the broader cultural and political discourse in which our work was set. For most of the last 75 years of the 20th century, Socialism, Fascism, Existentialism, Structuralism, Post-Structuralism, Conservatism, and other ideologies vied with one another broadly in our politics and narrowly at our academic conferences.

But it wasn’t always so. Late 19th and early 20th century scholarship was dominated not by big ideas, but by methodological refinement and disciplinary consolidation. Activities like philology, lexicology, and especially bibliography, denigrated in the later 20th century as unworthy of serious scholarly attention, were taken very seriously in the 19th and early 20th centuries. Serious scholarship was concerned as much with organizing knowledge as it was with framing knowledge in an ideological construct. Take my sub-discipline, the history of science, as an example. Whereas the last few decades of research have been dominated by a debate over the relative merits of “constructivism” (the idea, in Jan Golinski’s succinct definition, “that scientific knowledge is a human creation, made with available material and cultural resources, rather than simply the revelation of a natural order that is pre-given and independent of human action”), the history of science was in fact founded in an outpouring of bibliography. The life work of the first great American historian of science, George Sarton, was not an idea, but a journal (Isis), a professional society (the History of Science Society), a department (Harvard’s), a primer (his Introduction to the History of Science), and especially a bibliography (the Isis Cumulative Bibliography). Tellingly, the great work of his greatest pupil, Robert K. Merton, was an idea: the younger Merton’s “Science, Technology and Society in Seventeenth Century England” defined history of technology as social history for a generation. By the time Merton was writing in the 1930s, the cultural climate had changed and the consolidating and methodological activities of the teacher were giving way to the ideological and theoretical activities of the student.

I believe we are at a similar moment of change right now, that we are entering a new phase of scholarship that will be dominated not by ideas, but once again by organizing activities, both in terms of organizing knowledge and organizing ourselves and our work. My difficulty in answering the question “What’s the big idea in history right now?” stems from the fact that, as a digital historian, I traffic much less in new theories than in new methods. The new technology of the Internet has shifted the work of a rapidly growing number of scholars away from thinking big thoughts to forging new tools, methods, materials, techniques, and modes of work which will enable us to harness the still unwieldy, but obviously game-changing, information technologies now sitting on our desktops and in our pockets. These concerns touch all scholars. Our Zotero research management tool is used by three quarters of a million people, all of them grappling with the problem of information overload. And although much of the discussion remains informal, it’s no accident that Wikipedia is right now one of the hottest topics for debate amongst scholars.

Perhaps most telling is the excitement that now (or really, once again) surrounds the library. If you haven’t been to a library conference lately, I suggest you do so. The buzz amongst librarians these days dwarfs anything I have seen in my entire career amongst historians. The terms “library geek” and “sexy librarian” have gained new currency as everyone begins to recognize the potential of exciting library-centered projects like Google Books.

All of these things—collaborative encyclopedism, tool building, librarianship—fit uneasily into the standards of scholarship forged in the second half of the 20th century. Most committees for promotion and tenure, for example, must value single authorship and the big idea more highly than collaborative work and methodological or disciplinary contribution. Even historians find it hard to internalize the fact that their own norms and values have and will again change over time. But change they must. In the days of George Sarton, a thorough bibliography was an achievement worthy of great respect, and an office closer to the reference desk in the library an occasion for great celebration (Sarton’s small suite in Study 189 of Harvard’s Widener Library was the epicenter of history of science in America for more than a quarter century). As we tumble deeper into the Internet age, I suspect it will be again.

[Image credit: Alex Pang; Quote: Jan Golinski, Making Natural Knowledge (Cambridge University Press, 1998), p. 6.]

Twitter as a tool for outreach

In an earlier post I wrote about the early buzz around Omeka, both in the forums and among education, museum, public history, and library bloggers. One thing I didn’t mention—and frankly did not expect—was the buzz about Omeka on Twitter, the popular SMS-centered microblogging, won’t-get-it-till-you’ve-used-it social networking platform.

Twitter has been getting a lot of attention lately as a tool for use in the classroom, including an insightful blog post and front-page video segment on the Chronicle of Higher Education website by University of Texas at Dallas professor David Parry. It turns out Twitter has also been a great way to build a community around Omeka—to get in touch with possible users, to keep in touch with existing users, to give the product a personality, and to provide information and support. Among other things, we have been answering technical questions using Twitter, connecting far-flung users with Twitter, and pointing to blog posts and press coverage on Twitter. Because the barrier to participation is so low—Twitter only allows messages of 140 characters or less—people seem more willing to participate in the discussion than if it were occurring on a traditional bulletin board or even in full length blog posts. Because every posting on Twitter is necessarily short, sweet, informal, and free from grammatical constraints, I think people feel freer just to say what’s on their minds. Because Twitter asks its users to respond to a very specific and very easily answered question—”What are you doing?”—it frees them (and us) from the painstaking and time-consuming work of crafting a message and lets people just tell us how they’re getting on with Omeka. And because Twitter updates can be sent and received in many different ways from almost anywhere (via text message, on the web, via instant message), the Omeka Twitter community has a very active, very present feel about it.

I’m very encouraged by all this, not just for the narrow purposes of Omeka, but for digital humanities and public history outreach in general. Interactivity, audience participation, and immediacy are longstanding values of both public history and digital humanities, and Twitter very simply and subtly facilitates them all. The experience of the last week has proved to me that we should be doing this for all future projects at CHNM, not just our software projects like Omeka and Zotero, but also for our online collecting projects like the Hurricane Digital Memory Bank, our public exhibitions like the forthcoming Gulag: Many Days, Many Lives, and our education projects like the forthcoming Making the History of 1989.

For now, if you’d like to join the Omeka Twitter community, you can sign up for a Twitter account and start following Omeka. If you’re not quite ready to dive in head first, or if you just want to keep an eye on what other Omeka followers are doing, you can simply subscribe to the “Omeka and Friends” public feed. Finally, if you want to see what I’m up to as well, you can find me on Twitter at (no surprise) FoundHistory.