Check out these amazing WPA-style posters created by the Department of Energy to mark the infrastructure achievements made possible under the 2009 stimulus bill. I hope this time around, the government doesn’t wait 10 years to start selling the infrastructure and climate bills that passed earlier this year.
Two takes on this year’s tech industry crash: The first, from Derek Thompson, is cultural (the crash is big tech’s “midlife crisis”). The second, from Matt Yglesias, is financial (higher interest rates are making speculation in technology relatively less attractive).
Steven Johnson on the importance of the cassette tape and the way it changed both the sound and the business of music—in many of the same ways that another low-fidelity technology, the mp3, did.
Finally, if you have been wondering what Post.news is, how it’s different from other social networks, and especially how it plans to make money, here’s a primer from Nieman Journalism Lab.
I recently relistened to an interview Ezra Klein did with Danielle Allen (of Harvard’s Edmond J. Safra Center for Ethics) in 2019, in which they discuss how science, technology, and business differ fundamentally from politics: the former disciplines assume a set of values that are already ordered by priority (efficiency, profit, etc.), while politics is essentially all about the setting and reordering of those values. That’s why engineering and STEM have a hard time “fixing” politics and a hard time “solving” more human questions (and perhaps even why STEM majors vote in much smaller numbers than humanities majors).
This is something the pandemic has thrown into sharp relief in the years since Klein and Allen’s conversation. On one level, STEM can “fix” the pandemic by giving us miracle vaccines. But that’s only if we assume a set of values held in common by the populace (the health of the community, safety, trust in expertise, etc.). If the values themselves are at issue, as they are surrounding COVID-19, then STEM doesn’t have much to offer, at least for those communities (red state voters, anti-vaxxers) whose values diverge from those assumed by STEM.
This suggests, as Allen argues, that we need to rebalance the school curriculum in favor of humanities education, including paying greater attention to language (the primary toolkit of politics) and civics. It also suggests the need for more humanities within the STEM curriculum—not just the three-credit add-on ethics courses that characterize engineering programs and medical school, but a real integration of humanities topics, methods, and thinking as part of what it means to “know” about STEM.
This is, of course, something that’s especially appealing to me as a historian of science, but it should be just as appealing to engineers, who like to frame their work as “problem solving.” If STEM really wants to solve the big problems facing us today, it is going to have to start further back and solve not just for technical questions but also for the values questions that increasingly precede them.
A rough transcript of my talk at the 2013 ACRL/NY Symposium last week. The symposium’s theme was “The Library as Knowledge Laboratory.” Many thanks to Anice Mills and the entire program committee for inviting me to such an engaging event.
When Bill Gates and Paul Allen set out in 1975 to put “a computer on every desk and in every home, all running Microsoft software,” it was absurdly audacious. Not only were the two practically teenagers, but practically no one owned a computer. When Tim Berners-Lee called the protocols he proposed primarily for internal sharing of research documents among his laboratory colleagues at CERN “the World Wide Web,” it was equally audacious. Berners-Lee was just one of hundreds of physicists working in relative anonymity in the laboratory. His supervisor approved his proposal, allowing him six months to work on the idea, with the brief handwritten comment, “vague, but exciting.”
In hindsight, we now know that both projects made good on their audacious claims. More or less every desk and every home now has a computer, more or less all of them running some kind of Microsoft software. The World Wide Web is indeed a world-wide web. But what did these visionaries see that their contemporaries didn’t? Gates and Allen, like Berners-Lee, saw the potential of distributed systems.
In stark contrast to the model of mainframe computing dominant at the time, Gates and Allen (and a few peers such as Steve Jobs and Steve Wozniak and other members of the Homebrew Computer Club) saw that computing would achieve its greatest reach if computing power were placed in the hands of users. They saw that the personal computer, by moving computing power from the center (the mainframe) to the nodes (the end user terminal) of the system, would kick-start a virtuous cycle of experimentation and innovation that would ultimately lead to everyone owning a computer.
Tim Berners-Lee saw (as indeed did his predecessors who built the Internet atop which the Web sits) that placing content creation, linking, indexing, and other application-specific functions at the fringes of the network, and allowing the network simply to handle data transfers, would enable greater ease of information sharing, a flourishing of connections between and among users and their documents, and thus a free flow of creativity. This distributed system of Internet+Web stood in stark contrast to the centralized, managed computer networks that dominated the 1980s and early 1990s, networks like CompuServe and Prodigy, which managed all content and functional applications from their central servers.
This design principle, called the “end-to-end principle,” states that most features of a network should be left to users to invent and implement, that the network should be as simple as possible, and that complexity should be developed at its end points, not at its core. In short, the network should be dumb and the terminals should be smart. This is precisely how the Internet works. The Internet itself doesn’t care whether the data being transmitted is a sophisticated Flash interactive or a plain text document. The complexity of Flash is handled at the end points, and the Internet just transmits the data.
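To make the principle concrete, here is a minimal sketch (in Python, with invented function names) of a “dumb” network: the transport moves bytes without inspecting or interpreting them, while all application intelligence lives at the end points.

```python
import json

def transmit(payload: bytes) -> bytes:
    """The 'dumb' network: it moves bytes without knowing what they mean."""
    return bytes(payload)

# All application complexity lives at the end points.
def send_document(text: str) -> bytes:
    return text.encode("utf-8")

def send_structured(record: dict) -> bytes:
    return json.dumps(record).encode("utf-8")

# The same transport call carries both kinds of payload; only the
# receiving end point needs to know how to interpret each one.
doc = transmit(send_document("plain text document"))
rec = transmit(send_structured({"title": "Vague, but exciting"}))

print(doc.decode("utf-8"))         # plain text document
print(json.loads(rec)["title"])    # Vague, but exciting
```

The design payoff is that new applications (here, a hypothetical structured-record format) can be added at the edges without changing the network at all.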
In my experience, digital cultural heritage and digital humanities projects function best when they adhere to this design principle technically, structurally, and administratively. They work best when content is created and functional applications are designed at the nodes, that is, when the real work is performed at the edges and the management functions of the system are limited to establishing communication protocols and keeping open the pathways along which ideas, content, collections, and code can flow. In short, digital cultural heritage and digital humanities projects work best when they are structured like the Internet itself, the very network upon which they operate and thrive. The success of THATCamp in recent years demonstrates the truth of this proposition.
Begun in 2008 by my colleagues and me at the Roy Rosenzweig Center for History and New Media as an unfunded gathering of digitally minded humanities scholars, students, librarians, museum professionals, and others, THATCamp has in five years grown to more than 100 events in 20 countries around the globe.
How did we do this? Well, we didn’t really do it at all. Shortly after the second THATCamp event in 2009, one of the attendees, Ben Brumfield, asked if he could reproduce the gathering and use the name with colleagues attending the Society of American Archivists meeting in Austin. Shortly after that, other attendees organized THATCamp Pacific Northwest and THATCamp Southern California. By early 2010, THATCamp seemed to be “going viral,” and we worked with the Mellon Foundation to secure funding to help coordinate what was now something of a movement.
But that money wasn’t directed at funding individual THATCamps or organizing them from CHNM. Mellon funding for THATCamp paid for information, documentation, and a “coordinator,” Amanda French, who would be available to answer questions and make connections between THATCamp organizers. To this day, each THATCamp remains independently organized, planned, funded, and carried out. The functional application of THATCamp takes place completely at the nodes. All that’s provided centrally at CHNM are the protocols—the branding, the ground rules, the architecture, the governance, and some advice—by which these local applications can perform smoothly and connect to one another to form a broader THATCamp community.
As I see it, looking and acting like the Internet—adopting and adapting its network architecture to structure our own work—gives us the best chance of succeeding as digital humanists and librarians. What does this mean for the future? Well, I’m at once hopeful and fearful for the future.
On the side of fear, I see much of the thrust of new technology today pointing in the opposite direction, towards a re-aggregation of innovation from the nodes to the center, centers dominated by proprietary interests. This is best represented by the App Store, which answers first and foremost to the priorities of Apple, but also by “apps” themselves, which centralize users’ interactions within walled gardens not dissimilar to those built by CompuServe and Prodigy in the pre-web era. The Facebook App is designed to keep you in Facebook. Cloud computing is a more complicated case, but it too removes much of the computing power that in the PC era used to be located at the nodes to a central “cloud.”
On the other hand, on the side of hope, are developments coming out of this very community, developments like the Digital Public Library of America, which is structured very much according to the end-to-end principle. DPLA’s executive director, Dan Cohen, has described DPLA’s content aggregation model as ponds feeding lakes feeding an ocean.
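Cohen’s metaphor maps naturally onto the end-to-end structure described above, and a toy sketch makes the shape of it plain (illustrative Python; all collection names and records here are invented, not actual DPLA partners or API calls):

```python
def aggregate(sources):
    """A hub simply gathers records from the level below; it creates nothing."""
    return [record for source in sources for record in source]

# "Ponds": individual collections, where records are actually created.
pond_a = [{"title": "Town photographs", "holder": "Local Historical Society"}]
pond_b = [{"title": "Oral histories", "holder": "County Library"}]
pond_c = [{"title": "Shipping ledgers", "holder": "Maritime Museum"}]

# "Lakes": regional service hubs aggregating nearby ponds.
lake_northeast = aggregate([pond_a, pond_b])
lake_coastal = aggregate([pond_c])

# The "ocean": a national aggregation of the lakes.
ocean = aggregate([lake_northeast, lake_coastal])

print(len(ocean))  # 3
```

The point of the structure is that the hubs add no content of their own: all the real work of creating and describing collections stays at the nodes, exactly as the end-to-end principle prescribes.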
As cultural heritage professionals, it is our duty to empower end users—or as I like to call them, “people.” Doing this means keeping our efforts, regardless of which direction the latest trends in mobile and cloud computing seem to point, looking like the Internet.
Yesterday I received a letter from Google addressed to Robert T. Gunther at Found History. Gunther was the founder of the Museum of the History of Science at Oxford, where I did my doctoral work, and a major figure in my dissertation, so I am very honored to welcome him to the Found History staff. Despite his having passed away in 1940, it is my hope that Dr. Gunther will make significant contributions to this blog’s coverage of the history of scientific instrumentation.
During a discussion of e-book readers on a recent episode of Digital Campus, I made a comparison between Amazon’s Kindle and Apple’s iPod which I think more or less holds up. Just as Apple revolutionized a fragmented, immature digital music player market in the early 2000s with an elegant, intuitive new device (the iPod) and a seamless, integrated, but closed interface for using it (iTunes)—and in doing so managed very nearly to corner that market—so too did Amazon hope to corner an otherwise stale e-book market with the introduction last year of its slick, integrated, but closed Kindle device and wireless bookstore. No doubt Amazon would be more than happy to command the same eighty percent share of the e-book market that Apple now enjoys of the digital music player market.
If these entries into the e-book market are successful, it may foretell a more open future for e-books than has befallen digital music. It would also suggest that the iPod model of a closed, end-to-end user experience isn’t the future of computing, handheld or otherwise. Indeed, as successful and transformative as it is, Apple’s iPhone hasn’t been able to achieve the kind of dominance of the “superphone” market that the iPod achieved in the music player market, something borne out by a recent Gartner report, which projects Nokia’s Symbian and Android in first and second place by number of handsets by 2012, with more than fifty percent combined market share. This story of a relatively open hardware and operating system combination winning out over a more closed, more controlled platform is the same one that played out two decades ago, when the combination of the PC and Windows won out over the Mac for leadership of the personal computing market. If Sony, Barnes & Noble, and other late entrants into the e-book game finish first, it will have shown the end-to-end iPod experience to be the exception rather than the rule, much to Amazon’s disappointment, I’m sure.
Jessica Pritchard of the American Historical Association blog reports on a panel at last month’s annual meeting that asked what it takes to be a public historian. Entitled “Perspectives on Public History: What Knowledge, Skills, and Experiences are Essential for the Public History Professional?” the panel was chaired by George Mason’s own Spencer Crew.
Going back a bit to the December issue of Code4Lib Journal, Dale Askey considers why librarians are reluctant to release their code and suggests some strategies for overcoming that reluctance. I have to say I sympathize completely with my colleagues in the library; I think the entire Omeka team will agree with me that putting yourself out there in an open source project is no easy feat of psychology.
John Slater, Creative Director of Mozilla, rightly notes that, however unlikely it may seem, t-shirts are important to the success of open source software. In his T-Shirt History of Mozilla, Slater shows us 50 designs dating back to the late 1990s.
Timelines.tv presents 1000 years of British history through a series of film clips organized along three parallel and interlinked timelines, one each for social, political, and national (English, Irish, Welsh, Scottish) history. Very high quality content (originally filmed for the BBC) distributed in a very popular format (the timeline). And a pretty slick website to boot.
Open Source Decade. Ars Technica recalls Tim O’Reilly’s 1998 “Freeware Summit” where “open source” first emerged as a term of choice in the free, open, libre, etc. software movement.