The Pseudoiterative Academic

With the semester having just ended, many of us are settling into new summertime routines and hoping those routines support both some research productivity and some measure of relaxation. For me, in addition to the transition from semester to summer, I’m also transitioning into a more active period for our Greenhouse Studios initiative: our first projects are entering their intensive build sprints, we’re hiring our first full-time staff, and we’re preparing to move into a new, custom-built space on the first floor of the library.

Times of transition are times when we establish new habits—good ones or bad ones—and I’m trying to keep this passage from Kim Stanley Robinson’s 2312 in mind as I transition to summer and Greenhouse Studios embarks on a new phase in its development:

Habits begin to form at the very first repetition. After that there is a tropism toward repetition, for the patterns involved are defenses, bulwarks against time and despair. Wahram was very aware of this, having lived the process many times; so he paid attention to what he did when he traveled, on the lookout for those first repetitions that would create the pattern of that particular moment in his life. So often the first time one did things they were contingent, accidental, and not necessarily good things on which to base a set of habits. There was some searching to be done, in other words, some testing of different possibilities. That was the interregnum, in fact, the naked moment before the next exfoliation of habits, the time when one wandered doing things randomly. The time without skin, the raw data, the being-in-the-world. They came a bit too often for his taste. Most of the terraria offering passenger transport around the solar system were extremely fast, but even so, trips often took weeks. This was simply too much time to be banging around aimlessly; doing that one could easily slide into a funk or some other kind of mental hibernation. In the settlements around Saturn this sort of thing had sometimes been developed into entire sciences and art forms. But any such hebephrenia was dangerous for Wahram, as he had found out long before by painful experience. Too often in his past, meaninglessness had gnawed at the edges of things. He needed order, and a project; he needed habits. In the nakedness of the moments of exfoliation, the intensity of experience had in it a touch of terror—terror that no new meaning would blossom to replace the old ones now lost. Of course there was no such thing as a true repetition of anything; ever since the pre-Socratics that had been clear, Heraclitus and his un-twice-steppable river and so on. So habits were not truly iterative, but pseudoiterative. The pattern of the day might be the same, in other words, but the individual events fulfilling the pattern were always a little bit different. Thus there was both pattern and surprise, and this was Wahram’s desired state: to live in a pseudoiterative. But then also to live in a good pseudoiterative, an interesting one, the pattern constructed as a little work of art. No matter the brevity of a trip, the dullness of the terrarium or the people in it, it was important to invent a pattern and a project and pursue it with all his will and imagination. It came to this: shipboard life was still life. All days had to be seized.

Carpe diem.

When UConn broke up with Adobe: A parable of artists and copyright

One of the things I try very hard to do in my DMD 2010 “History of Digital Culture” class is to teach students that their technology choices are neither inevitable nor determined primarily by what’s “best,” but rather that their technology choices are values choices, reflections of their ethical commitments and those of the communities that create and use those technologies.

When the University of Connecticut’s UITS (University Information Technology Services) made a choice not to renew its Adobe Creative Cloud site license, my students correctly judged that this was a values choice about the relative importance the higher administration places on artistic work at the university. The decision not to support software for artists, while at the same time maintaining support for software for, say, engineers, is a statement about how the university values different kinds of work on campus. I was pleased that the students immediately saw that this wasn’t just a choice about the quality of the software or even its cost, but about the intellectual commitments and identity of the university. What the students didn’t so easily grasp, however, was that the controversy over the Adobe suite also reflects on the values choices of the students themselves, on the values choices that digital artists have made over many years to put the Adobe suite and other expensive, proprietary, closed-source software packages at the center of their creative practice, which in turn stems from a set of larger choices artists have made vis-à-vis our prevailing copyright regime.

Artists have largely chosen to think about copyright as something that exists to protect them and their work, and they have generally supported our ever-stricter copyright regime. Moving from a humanities and social sciences faculty to a fine arts faculty when I came to UConn from George Mason in 2013, I was struck by how poorly my storm-the-barricades, anti-copyright, open access agenda went over with my colleagues. Not that anyone really cared, but it was apparent from the beginning that I was coming at conversations that touched upon intellectual property (for example, a conversation about making faculty syllabi freely available on the web) from one side of the fence and they were coming at them from the other. Indeed, UConn’s School of Fine Arts offers a course on copyright for artists called Protecting the Creative Spirit: The Law and the Arts, which is taught by two lawyers. You can tell from the title of the course where its sympathies lie.

My DMD 2010 students (most of whom are freshmen and sophomores studying in the department of Digital Media & Design, which resides within the School of Fine Arts) are no exception. When I teach the unit on copyright, the first question I ask the class is, “What is the purpose of copyright?” Inevitably, students answer with some version of “to keep people from ripping you off.” My next move is to put the copyright clause of the Constitution up on the overhead and explain to them that, in fact, the purpose of copyright is to “Promote the Progress of Science and useful Arts” and that protecting an author’s exclusive rights for a limited term is simply a means to an end.

What is more, I tell them that the ever-stricter copyright regime we live with today wasn’t really designed to protect artists at all, although some may have used and benefited from its protections. Instead, it was designed by and for big corporations, and it does a much better job of protecting those corporations than it does of protecting individual artists. It is true that many of these corporations employ artists (several former DMD 2010 students are now working for Disney), but those artists’ works are works for hire. The works may be protected by copyright law, but they are protected to the benefit of the employer, not the employee.

It is telling that the feelings of outrage and abandonment regarding the UITS Adobe announcement weren’t evenly distributed among my students. Digital Media & Design students at UConn choose from six different “concentrations,” electing to focus on either 2D animation/motion graphics; 3D animation; game design and development; web design and development; digital media business strategies; or digital culture, learning, and advocacy. (Students from all concentrations are required to take DMD 2010.) Especially hard hit by the news were the 2D/motion graphics students, for whom Adobe After Effects sits at the heart of their practice and for which there really isn’t a substitute, commercial or open source. Letting the Adobe license lapse was basically going to kill their creative practice, or, at the very least, put them out several hundred dollars.

My web design and development students, on the other hand, felt sympathy for their colleagues, but were pretty blasé about the whole thing. For them, letting the Adobe license lapse wouldn’t really change anything. The Adobe corporation has very little leverage over a web developer. To drive the point home, I challenged these web development students to think of a single piece of software that, if taken away from them, would affect their practice in any significant way. A few came up with TCP/IP, but quickly corrected themselves: TCP/IP is a protocol, not a piece of software, and is an open standard in any case. Apache was another, but, again, it’s open source, and there are serviceable alternatives. Certainly, they couldn’t name a corporation that could raise its prices and bring their web development work to a halt in the way that Adobe was threatening to stop the work of our motion graphics artists. The difference, of course, is that our web developers rely on an open source technology stack and our motion graphics artists rely on proprietary software protected by a copyright law that was written in part by the very companies that produce it. Our web developers are not captive to copyright. Our motion graphics artists are.

Far from protecting artists, this is the best example I have of how our overly restrictive copyright regime harms artists. Rather than teaching our students how to situate their creative practice within a framework of intellectual property protection, thereby reinforcing a copyright regime that wasn’t put in place for them in the first place, we should be encouraging our students to resist this regime. We should be teaching them to advocate for open access and open source software. In the longer term, we should be helping them to develop open source and open access alternatives themselves. This is an especially important message for my digital media and design students who, with their considerable skills, will be in a position to effect the longer term project of building the open source tools that will be necessary to free artists’ creative practice from proprietary software. In the long term, maybe the very long term, this is the only way we can keep digital artists from being held hostage to corporations as Adobe held my students hostage this semester.

Fortunately, we’ve sorted out the Adobe license issue for now by cutting a licensing deal (shall we call it a hostage negotiation?) apart from UITS for students enrolled in the School of Fine Arts. For now, our students are safe. But only for now. You can bet I’ll be screaming this example over the fence at my colleagues in the School of Fine Arts the next time we talk about copyright.

My new outfit: Greenhouse Studios | Scholarly Communications Design at the University of Connecticut

Looking down the page, it seems I haven’t posted here on the ol’ blog in nearly three years. Not coincidentally, that’s about when I started work on the initiative I’m pleased to announce today. It was in the fall of 2014 that I first engaged in conversations with my UConn colleagues (especially Clarissa Ceglio, Greg Colati, and Sara Sikes, but lots of other brilliant folks as well) and program officers at the Andrew W. Mellon Foundation about the notion of a “scholarly communications design studio” that would bring humanist scholars into full, equal, and meaningful collaboration with artists, technologists, and librarians. Drawing on past experiences at RRCHNM, especially One Week | One Tool, this new-style digital humanities center would put collaboration at the center of its work by moving collaboration upstream in the research and publication workflow. It would bring designers, developers, archivists, editors, students, and others together with humanist faculty members at the very outset of a project, not simply to implement a work but to imagine it. In doing so, it would challenge and level persistent hierarchies in academic labor, challenge notions of authorship, decenter the faculty member as the source of intellectual work, and bring a divergence of thought and action to the design of scholarly communication.

A planning grant from Mellon in 2015 allowed us to explore these ideas in greater depth. We explored models of collaboration and project design in fields as disparate as industrial design, engineering, theater, and (of course) libraries and digital humanities. We solicited “mental models” of good project design from diverse categories of academic labor including students, faculty members, archivists, artists, designers, developers, and editors. We visited colleagues around the country both inside and outside the university to learn what made for successful and not-so-successful collaboration.

The result of this work was a second proposal to Mellon and, ultimately, the launch this week of Greenhouse Studios | Scholarly Communications Design at the University of Connecticut. Starting this year with our first cohort of projects, we will be pioneering a new, inquiry-driven, collaboration-first model of scholarly production that puts team members and questions at the center of research and publication rather than the interests of a particular faculty member or other individual. Teams will be brought together to develop answers to prompts generated and issued internally by Greenhouse Studios. Through a facilitated design process, whole teams will decide the audience, content, and form of Greenhouse Studios projects, not based on any external expectations or demands, but according to their available skills and resources, bounded by the constraints they identify, and in keeping with team member interests and career goals.

Stay tuned to see what these teams produce. In the meantime, after three long years of getting up and running, I plan to be posting more frequently in this space, from my new academic home base, Greenhouse Studios.

Elevator Pitch

Last week I had the pleasure of serving as facilitator at the first Mellon-funded Triangle Scholarly Communication Institute (SCI) in Chapel Hill. For the better part of the week, five diverse teams of scholars, librarians, developers, and publishers came together to advance work on projects addressing challenges ranging from data visualization and virtual worlds to providing computational research access to large newspaper collections to building curriculum resources for understanding Sikh religion and culture. It was a great week.

At the end of the event, the teams were each asked to deliver an “elevator pitch” for their project. Quite what this pitch should entail remained something of an open question going into the final day of the Institute, so the project organizers, myself included, came up with the following structure on the spot and shared it with the teams the evening before their presentations:

  • “The What”: What is your project? What needs does it meet or problems does it solve? How does it meet those needs/solve those problems?
  • “The So What?”: Why does this project matter? What are its implications for the field of scholarly communication? What are its broader impacts for the way scholarship is produced and disseminated?
  • “The What Next?”: What is your plan for implementing your project? What will be the first thing/s you do to advance your project when you leave SCI? How will you maintain working communication between team members in the weeks and months ahead?

It occurs to me that this is a formulation that I have used in many elevator pitches, planning documents, grant proposals, etc. over the years and that it may be useful to others. When you’re trying to convince people to do something, buy something, or support something, these are generally the things they will want to know — What am I buying? Why should I want it? How will you deliver it? Most RFPs, grant guidelines, and the like are variations on this theme. So, when you’re at the early stages of planning a new project, wherever it may end up, this structure may be a useful starting point.

Happy hunting.

What The New Yorker Got Wrong About Lawrence Lessig

In its October 13, 2014 article about Lawrence Lessig’s Mayday PAC, The New Yorker writes:

In 2001, Lessig co-founded Creative Commons, an alternative copyright system that allows people to share their work more freely.

In fact, this isn’t quite right. Creative Commons is not an “alternative copyright system.” It is a licensing regime that uses the existing framework of copyright law to make it easier for copyright holders to release their works under open terms. This is an important distinction in thinking about what Lessig is trying to do with the Mayday PAC, which aims to use the loosening of restrictions on campaign donations that resulted from the Supreme Court’s Citizens United decision to raise millions of dollars specifically in order to elect candidates dedicated to campaign finance reform. The New Yorker titled its piece, “Embrace the Irony,” and the Mayday PAC is indeed an irony. But in the context of Lessig’s earlier work on Creative Commons, it is a familiar one. Both efforts seek to use existing legal frameworks to subvert a status quo those frameworks were intended to support.

Getting into Digital Humanities: A top-ten list

Today I’ll be joining a roundtable discussion hosted by the New York Council for the Humanities for its incoming class of public humanities fellows. I was asked to prepare a “top-ten list” for public humanists looking to get started in digital humanities, and with the help of friends on Twitter, I came up with the following:

10) Enter the circle (read, tweet, blog)
9) Start with partners
8) Attend THATCamp
7) Write grants, not papers
6) Release early and often
5) Stop worrying about the definition of DH
4) Digital is always public
3) Must. Try. New. Things.
2) Break something
1) Lather, rinse, repeat

Instead of explaining this advice in prose, I decided to put together a video. Here it is.

N.B. As my mother always told me, “do as I say, not as I do.”

Innovation, Use, and Sustainability

Revised notes for remarks I delivered on the topic of “Tools: Encouraging Innovation” at the Institute of Museum and Library Services (IMLS) National Digital Platform summit last month at the New York Public Library.

What do we mean when we talk about innovation? To me innovation implies not just the “new” but the “useful.” And not just the “useful” but the “implemented” and the “used.” Used, that is, by others.

If a tool stays in house, in just the one place where it was developed, it may be new and it may be interesting—let’s say “inventive”—but it is not “innovative.” Other terms we use in this context—“ground breaking” and “cutting edge,” for example—share this meaning. Ground is broken for others to build upon. The cutting edge precedes the rest of the blade.

The IMLS program that has been charged and most generously endowed with encouraging innovation in the digital realm is the National Leadership Grants: Advancing Digital Resources program. The idea that innovation is tied to use is implicit in the title of the program: the word “leadership” implies a “following.” It implies that the digital resources that the program advances will be examples to the field to be followed widely, that the people who receive the grants will become leaders and gain followers, that the projects supported by the program will be implemented and used.

This is going to be difficult to say in present company, because I am a huge admirer of the NLG program and its staff of program officers. I am also an extremely grateful recipient of its funds. Nevertheless, in my estimation as an observer of the program, a panelist, and an awardee, the program has too often fallen short in this regard: it has supported a multitude of new and incredibly inventive work, but that work has too rarely been taken up by colleagues outside of the originating institution. The projects the NLG program has spawned have been creative, exciting, and new, but they have too rarely been truly innovative. This is to say that the ratio of “leaders” to “followers” is out of whack. A model that’s not taken up by others is no model at all.

I would suggest two related remedies for the Leadership Grants’ lack of followers:

  1. More emphasis on platforms. To be sure, the NLG program has produced some widely used digital library and museum platforms, including the ones I have worked on. But I think it bears emphasizing that the limited funds available for grants would generate better returns if they went to enabling technologies rather than end products, to platforms rather than projects. Funding platforms doesn’t just mean funding software—there are also social and institutional platforms like standards and convening bodies—but IMLS should be funding tools that allow lots of people to do good work, not the good work itself of just a few.
  2. More emphasis on outreach. Big business doesn’t launch new products without a sales force. If we want people to use our products, we shouldn’t launch them without people on staff who are dedicated to encouraging their use. This should be reflected in our budgets, a much bigger chunk of which should go to outreach. That also means more flexibility in the guidelines and among panelists and program officers to support travel, advertising, and other marketing costs.

Sustainability is a red herring

These are anecdotal impressions, but it is my belief that the NLG program could be usefully reformed by a more laser-like focus on these and other uptake and go-to-market strategies in the guidelines and evaluation criteria for proposals. In recent years, a higher and higher premium has been placed on sustainability in the guidelines. I believe the effort we require applicants to spend crafting sustainability plans and grantees to spend implementing them would be better spent on outreach—on sales. The greatest guarantor of sustainability is use. When things are used they are sustained. When things become so widely implemented that the field can’t do without them, they are sustained. Like the banks, tools and platforms that become too big to fail are sustained. Sustainability is very simply a function of use, and we should recognize this in allocating scarce energies and resources.

The Dividends of Difference: Recognizing Digital Humanities' Diverse Family Tree/s

[Image: Textile, Countryside Mural, 1975]

In her excellent statement of digital humanities values, Lisa Spiro identifies “collegiality and connectedness” and “diversity” as two of the core values of digital humanities. I agree with Lisa that digital humanists value both things—I certainly do—but it can be hard to *do* both things at the same time. The first value stresses the things we have in common. The second stresses the ways we are different. When we focus on the first, we sometimes neglect the second.

This is something that has been driven home to me in recent months through the efforts of #dhpoco (postcolonial digital humanities). Adeline and Roopika have shown us that sometimes our striving for and celebration of a collegial and connected (or as I have called it, a “nice”) digital humanities can, however unintentionally, serve to elide important differences for the sake of consensus and solidarity. #dhpoco has made us aware that a collegiality and connectedness that papers over differences can be problematic for underrepresented groups such as women and minorities, especially in a discipline that is still dominated by white men. A “big tent” that hides difference is no big tent at all.

As these critiques have soaked in, they have led me to wonder whether the eliding of differences to advance a more collegial and connected digital humanities may be problematic in other ways. Here I’m thinking particularly of disciplinary differences. Certainly, the sublimation of our individual disciplines for a broader digital humanities has led to definitional problems: the difficulty the field has faced in defining “digital humanities” stems in the first place from people’s confusion about the term “humanities.” Folks seem to get what history, philosophy, and literary criticism are, but humanities is harder to pin down. Just as certainly, calling our work “digital humanities” has made it more difficult for us to make that work understandable and creditable in disciplinary contexts: the unified interdisciplinary message may be useful with funding agencies or the Dean of Arts and Sciences, but it may be less so with one’s departmental colleagues.

But what else is lost when we iron out our disciplinary differences? Our histories, for one.

Most of us working in digital humanities know well the dominant narrative of the pre-2000s history of digital humanities. It is a narrative that begins with the work of Father Busa in the 1950s and 1960s, proceeds through the foundation of the Association for Computers and the Humanities (ACH) in the 1970s and the establishment of the Humanist listserv in the 1980s, and culminates with the foundation of the Text Encoding Initiative in the 1990s. Indeed, it is in the very context of the telling of this story that the term itself was born. “Digital Humanities” first came to widespread usage with the publication of A Companion to Digital Humanities, which proposed the term as a replacement for “humanities computing” in large part to broaden the tent beyond the literary disciplines that had grown up under that earlier term. The Companion contains important essays about digital work in history, anthropology, geography, and other disciplines. But it is Father Busa who provides the Foreword, and the introductory history told by Susan Hockey is told as the history of digital textual analysis. Indeed, even Will Thomas’s chapter on digital history is presented against the backdrop of this dominant narrative, depicting history in large part as having failed in its first attempts at digital work, as a discipline that was, in digital terms, passed by in the controversies over “cliometrics” in the 1960s and 1970s.

Let me be clear: I’m not slagging Susan, Will, or the other authors and editors of A Companion to Digital Humanities. Their volume went a long way toward consolidating the community of practice in which I’m now such a grateful participant. If it aimed to broaden the tent, it succeeded, and brought me with it. Nevertheless, as an historian, the story of Father Busa, of Humanist, and even of cliometrics is not my story. It is an important story. It is a story I do not refute. It is a story that should be told. But as a digital historian who isn’t much involved in textual analysis, it isn’t a story I can much identify with. Nor is it the only story we can tell.

My story, one I expect will resonate with many of my digital history colleagues, is a story that considers today’s rich landscape of digital history as a natural outgrowth of longstanding public and cultural historical activities rather than a belated inheritance of the quantitative history experiments of the 1960s and 1970s. It is a story that begins with people like Allan Nevins of the Columbia Oral History Office and Alan Lomax of the Library of Congress’s Archive of American Folk-Song, especially with the man-on-the-street interviews Lomax coordinated in the aftermath of the Pearl Harbor attacks. From these oral history and folklife collecting movements of the 1940s and 1950s we can draw a relatively straight line to the public, social, cultural, and radical history movements of the 1960s and 1970s. These later movements directly spawned organizations like the American Social History Project / Center for Media and Learning at the CUNY Grad Center, which was founded in the 1980s—not coincidentally, I might add, by Herb Gutman, who was the historical profession’s foremost critic of cliometrics—and the Roy Rosenzweig Center for History & New Media (my former institution), which was founded in the 1990s.

Importantly, these roots in oral history and folklife collecting are not simply institutional and personal. They are deeply methodological. Like today’s digital history, both the oral history and folklife collecting of the 1940s and 1950s and the public and radical history of the 1960s and 1970s were highly:

  1. technological;
  2. archival;
  3. public;
  4. collaborative;
  5. political; and
  6. networked.

Digital humanists often say that particular tools and languages are less important than mindset and method. Our tools are different, but digital historians learned their mindset and methods from the likes of Alan Lomax.

Thus, from my perspective, the digital humanities family tree has two main trunks, one literary and one historical, that developed largely independently into the 1990s and then came together in the late-1990s and early-2000s with the emergence of the World Wide Web. That said, I recognize and welcome the likely possibility that this is not the whole story. I would love to see this family tree expanded to describe three or more trunks (I’m looking at you, anthropology and geography). We should continue to bring our different disciplinary histories out and then tie the various strains together.

In my view, it’s time for a reorientation, for another swing of the pendulum. Having made so much progress together in recent years, having explored so much of what we have in common, I believe the time has come to re-engage with what makes us different. One potentially profitable step in this direction would be a continued exploration of our very different genealogies, both for the practical purposes of working within our departments and for the scholarly purposes of making the most of our methodological and intellectual inheritances. In the end, I believe an examination of our different disciplinary histories will advance even our interdisciplinary purposes: understanding what makes us distinctive will help us better see what in our practices may be of use to our colleagues in other disciplines and to see more clearly what they have to offer us.

[Image credits: Smithsonian Institution’s Cooper-Hewitt Museum, Library of Congress, Radical History Review]

Uber and Airbnb

I’m extremely uneasy about startups like Uber and Airbnb whose business models are grounded in sidestepping regulations that were originally intended as consumer- and labor-protection measures. People—both the service providers and their customers—love Uber and Airbnb because they offer greater flexibility and efficiency than traditional taxi and hotel services. Some of that flexibility is afforded by new communications technologies that offer a more direct connection between the service provider and the consumer. But a lot of that flexibility stems from the fact that these services are unregulated. Uber and Airbnb get closer to consumers not only by using information technology to ditch the middleman of the dispatcher (in Uber’s case) or travel agent and hotel chain (in Airbnb’s), but also by ditching the middleman of the government.

We can imagine lots of markets that could be streamlined by using information technology to sidestep middlemen to place service providers in more direct communication with consumers. But the middlemen are often the people who comply with government regulations that are intended to protect us from fraud and abuse. Middlemen often create friction and inefficiencies in the system. But sometimes a little friction is good.

Looks Like the Internet: Digital Humanities and Cultural Heritage Projects Succeed When They Look Like the Network

A rough transcript of my talk at the 2013 ACRL/NY Symposium last week. The symposium’s theme was “The Library as Knowledge Laboratory.” Many thanks to Anice Mills and the entire program committee for inviting me to such an engaging event.

When Bill Gates and Paul Allen set out in 1975 to put “a computer on every desk and in every home, all running Microsoft software,” it was absurdly audacious. Not only were the two practically teenagers; practically no one owned a computer. When Tim Berners-Lee called the protocols he proposed primarily for internal sharing of research documents among his laboratory colleagues at CERN “the World Wide Web,” it was equally audacious. Berners-Lee was just one of hundreds of physicists working in relative anonymity in the laboratory. His supervisor approved his proposal, allowing him six months to work on the idea with the brief handwritten comment, “vague, but exciting.”

In hindsight, we now know that both projects proved their audacious claims. More or less every desk and every home now has a computer, more or less all of them running some kind of Microsoft software. The World Wide Web is indeed a world-wide web. But what is it that these visionaries saw that their contemporaries didn’t? Both Gates and Allen and Berners-Lee saw the potential of distributed systems.

In stark contrast to the model of mainframe computing dominant at the time, Gates and Allen (and a few peers such as Steve Jobs and Steve Wozniak and other members of the Homebrew Computer Club) saw that computing would achieve its greatest reach if computing power were placed in the hands of users. They saw that the personal computer, by moving computing power from the center (the mainframe) to the nodes (the end user terminal) of the system, would kick-start a virtuous cycle of experimentation and innovation that would ultimately lead to everyone owning a computer.

Tim Berners-Lee saw (as indeed did his predecessors who built the Internet atop which the Web sits) that placing content creation, linking, indexing, and other application-specific functions at the fringes of the network and allowing the network simply to handle data transfers, would enable greater ease of information sharing, a flourishing of connections between and among users and their documents, and thus a free-flowing of creativity. This distributed system of Internet+Web was in stark contrast to the centralized, managed computer networks that dominated the 1980s and early 1990s, networks like Compuserve and Prodigy, which managed all content and functional applications from their central servers.

This design principle, called the “end-to-end principle,” states that most features of a network should be left to users to invent and implement, that the network should be as simple as possible, and that complexity should be developed at its end points, not at its core. That the network should be dumb and the terminals should be smart. This is precisely how the Internet works. The Internet itself doesn’t care whether the data being transmitted is a sophisticated Flash interactive or a plain text document. The complexity of Flash is handled at the end points and the Internet just transmits the data.
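To make the principle concrete, here is a minimal, purely illustrative sketch in Python. It is not any real network stack or protocol; the function names are hypothetical. The point is simply that the “network” function relays opaque bytes without interpreting them, while the endpoints carry all of the application logic.

```python
import json


def network_transfer(payload: bytes) -> bytes:
    """The dumb core: relay bytes unchanged, with no knowledge of their meaning."""
    return payload


def smart_sender(document: dict) -> bytes:
    """One endpoint decides how to encode its content (here, arbitrarily, as JSON)."""
    return json.dumps(document).encode("utf-8")


def smart_receiver(payload: bytes) -> dict:
    """The other endpoint decides how to decode and interpret what it receives."""
    return json.loads(payload.decode("utf-8"))


if __name__ == "__main__":
    message = {"title": "Vague, but exciting", "body": "A proposal for linked documents"}
    received = smart_receiver(network_transfer(smart_sender(message)))
    print(received["title"])  # all complexity lives at the edges; the middle just moves bytes
```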

In my experience digital cultural heritage and digital humanities projects function best when they adhere to this design principle, technically, structurally, and administratively. Digital cultural heritage and digital humanities projects work best when content is created and functional applications are designed, that is, when the real work is performed at the nodes and when the management functions of the system are limited to establishing communication protocols and keeping open the pathways along which work can take place, along which ideas, content, collections, and code can flow. That is, digital cultural heritage and digital humanities projects work best when they are structured like the Internet itself, the very network upon which they operate and thrive. The success of THATCamp in recent years demonstrates the truth of this proposition.

Begun in 2008 by my colleagues and me at the Roy Rosenzweig Center for History and New Media as an unfunded gathering of digitally-minded humanities scholars, students, librarians, museum professionals, and others, THATCamp has in five years grown to more than 100 events in 20 countries around the globe.

How did we do this? Well, we didn’t really do it at all. Shortly after the second THATCamp event in 2009, one of the attendees, Ben Brumfield, asked if he could reproduce the gathering and use the name with colleagues attending the Society of American Archivists meeting in Austin. Shortly after that, other attendees organized THATCamp Pacific Northwest and THATCamp Southern California. By early-2010 THATCamp seemed to be “going viral” and we worked with the Mellon Foundation to secure funding to help coordinate what was now something of a movement.

But that money wasn’t directed at funding individual THATCamps or organizing them from CHNM. Mellon funding for THATCamp paid for information, documentation, and a “coordinator,” Amanda French, who would be available to answer questions and make connections between THATCamp organizers. To this day, each THATCamp remains independently organized, planned, funded, and carried out. The functional application of THATCamp takes place completely at the nodes. All that’s provided centrally at CHNM are the protocols—the branding, the ground rules, the architecture, the governance, and some advice—by which these local applications can perform smoothly and connect to one another to form a broader THATCamp community.

As I see it, looking and acting like the Internet—adopting and adapting its network architecture to structure our own work—gives us the best chance of succeeding as digital humanists and librarians. What does this mean for the future? Well, I’m at once hopeful and fearful for the future.

On the side of fear, I see much of the thrust of new technology today to be pointing in the opposite direction, towards a re-aggregation of innovation from the nodes to the center, centers dominated by proprietary interests. This is best represented by the App Store, which answers first and foremost to the priorities of Apple, but also by “apps” themselves, which centralize users’ interactions within walled gardens not dissimilar to those built by Compuserve and Prodigy in the pre-web era. The Facebook App is designed to keep you in Facebook. Cloud computing is a more complicated case, but it too removes much of the computing power that in the PC era used to be located at the nodes to a central “cloud.”

On the other hand, on the side of hope, are developments coming out of this very community, developments like the Digital Public Library of America, which is structured very much according to the end-to-end principle. DPLA’s executive director, Dan Cohen, has described DPLA’s content aggregation model as ponds feeding lakes feeding an ocean.
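As a rough illustration of that aggregation pattern, here is a minimal sketch in Python. It is not DPLA’s actual harvesting code or API; the Collection, Hub, and build_ocean names are hypothetical stand-ins for local collections (“ponds”), regional service hubs (“lakes”), and a single shared index (“ocean”).

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Collection:
    """A 'pond': a local library or museum collection holding its own records."""
    name: str
    records: List[dict]


@dataclass
class Hub:
    """A 'lake': a regional hub that aggregates records from many local collections."""
    name: str
    collections: List[Collection] = field(default_factory=list)

    def harvest(self) -> List[dict]:
        # The hub only aggregates; each record keeps a pointer back to its source.
        return [dict(r, source=c.name) for c in self.collections for r in c.records]


def build_ocean(hubs: List[Hub]) -> List[dict]:
    """The 'ocean': a single shared index built from whatever the hubs provide."""
    return [record for hub in hubs for record in hub.harvest()]


if __name__ == "__main__":
    pond = Collection("Example Local Archive", [{"title": "Oral history interview, 1942"}])
    lake = Hub("Example Regional Hub", [pond])
    ocean = build_ocean([lake])
    print(len(ocean), ocean[0]["source"])
```

The design mirrors the end-to-end argument above: the work of description stays at the nodes, and the center does little more than gather and pass records along.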

As cultural heritage professionals, it is our duty to empower end users—or as I like to call them, “people.” Doing this means keeping our efforts, regardless of which direction the latest trends in mobile and cloud computing seem to point, looking like the Internet.

[Image credits: Flickr user didbygraham and Wikipedia.]