James Mill’s Commonplace Books



I was delighted to see this project to generate an “electronic resource” of James Mill’s common place books, and also enjoyed the quaint phrasing of calling it an “electronic resource,” as in, “James Mill’s Common Place Books— Now with electricity!”

I have long had a fondness for commonplace books as a way to organize information and ideas as they come up, which is handy for people like me who frequently want to capture thoughts. In my experience, they rely on an organizing principle of an index at the front in which you enter each subject, often through an alphabetized list of each consonant paired with each of the vowels (Ba, Be, Bi, Bo, Bu, and so on).


When you have an idea, you enter its topic in the appropriate place in the index, then simply note the page number on which you write it down. In this way, the book can be read chronologically but can also be navigated by subject. It seems that Mill did not organize his books this way, however, and so one of the accomplishments of the project was to organize the information.
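The index scheme described above is easy to picture as a small data structure. Here is a minimal sketch in Python, with the topics and page numbers invented purely for illustration:

```python
# A commonplace-book index files a topic under its first letter plus
# its first following vowel, mapping that key to (topic, page) pairs.
index: dict[str, list[tuple[str, int]]] = {}

def add_entry(topic: str, page: int) -> None:
    """File a topic under its first letter + first following vowel."""
    first = topic[0].upper()
    vowel = next(c for c in topic[1:].lower() if c in "aeiou")
    index.setdefault(first + vowel, []).append((topic, page))

# Invented example entries:
add_entry("Liberty", 12)
add_entry("Logic", 47)
add_entry("Labor", 3)

print(index["Li"])  # entries filed under L + i
```

Looking up “Li” returns every topic beginning with L whose first vowel is i, which is exactly how the paper index lets a reader jump from a subject to the pages where it appears.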

This project of James Mill’s commonplace books is appreciable in its user-friendliness and transparency of process. There is a clear user’s guide that orients the reader to its use, clarifying the structure of the electronic object but also providing insight into the structure of the original artifact, serving as a pleasant reminder that what the screen shows references a separate object that also exists out of sight. The introduction provides interesting context as to why this artifact was chosen to “electronicize.” Of particular interest to me is that one can view the “editing principles” used to translate the book into its electronic form. The object in question is founded on a transcription of Mill’s books by Prof. Robert Fenn. The editing principles remind us that its electronic form could have been different, had it been transcribed by someone else, or had different design principles been engaged.

In addition to accessing the contents of the original books, enhanced through the organization of material into topics, this object invites readers to participate in an ongoing organizing scheme through the creation of tags. I value this, again, because it serves to overcome the limitations of the initial coding scheme.


The interface is rather simple and unimpressive, but also unencumbered and easy to use, which may be helpful to those overwhelmed by too many “intuitive” buttons. Overall, this project is a very interesting case of how to extend the life and relevance of a textual object that is necessarily bounded by materiality and temporality (its chronology of creation). This, on top of the interesting purpose and structure of the original medium, serves as a sort of double look into how we can organize ideas, personally and collectively.

The Temporary Travel Office & Riparian City

For this review, I thought first of the digital, performance, and public projects made by UIUC professor of new media Ryan Griffis. Ryan uses websites coupled with mini-publications, prints, and other ephemera, as well as performance, to question the ways that we define place and interact with the natural world.

When I first started thinking about reviewing Ryan Griffis’ Temporary Travel Office for this assignment, I was hesitant, because, as an artist, I wondered if Ryan’s work would be considered ‘scholarly’ enough for it.  But then I began thinking that most artworks are in fact extra-textual scholarly projects.  Especially in new media, art projects become sites to interrogate intellectual ideas in a more than textual way, often combining text, digital objects, actual objects, performance and events.

Ryan’s work with the Temporary Travel Office (http://temporarytraveloffice.net/) is this kind of project, combining art with ecology, environmentalism, critical geography, archival work, and performance.  The project exists as a website, where downloads are available, but also at times as performance or individual ephemeral objects.  The Temporary Travel Office is a fictitious travel agency that aids the public by providing materials and tours of often-overlooked areas to disrupt the way we define space and how we move around in the physical world.  Parking lots, forgotten parks, and underprivileged neighborhoods become the sites of tourism in Ryan’s imagined travel agency.

While the Temporary Travel Office is involved with multiple projects, publications, and tours, one that is especially interesting is the (again fictitious) Riparian City.  Founded in 15,000 BCE, Riparian City is the site of the Doan Watershed in Cleveland.  On the Riparian City website (http://temporarytraveloffice.net/ripariancity/), the public can enter information into a map, pinning locations or even memories to re-imagine the ways that this neighborhood is defined.


Riparian City questions not only who defines geographies, but also what.  By creating a Temporary Travel Office in a made-up city, Ryan’s work is able to give agency to the space itself, attempting to demonstrate the multitude of forces that shape a sense of place, many of which are outside of human control.  And, while projects like this originated with performance pieces composed of a physical embassy, with national flags and passports for Riparian citizens, the remnants still exist online: images of the imagined city and the editable map, through which citizens who experience a place defined by local space, people, fauna, resources, and climate can still attempt to picture their city, at least digitally.

The Urban Research Toolkit (URT)

After reading Shannon Mattern’s article, I was interested in her “Urban Research Toolkit”. It is interesting that Mattern stated on her website that “As with most digital projects, this one proved very difficult to preserve.” However, it looks like its outcomes allowed her to create a syllabus on “Urban Media Archeology”, in which students designed several prototypes.

The Urban Research Toolkit (URT) was an open-source, online mapping platform that offered a new research approach in the humanities, social sciences, and design. Conceived as a tool for education, it emphasized the dissemination of a “culture technique” of researching urban space. More specifically, the project developed an interactive map of New York’s historical media infrastructures, such as telegraph lines, pneumatic tubes, and telephone networks. The intention was to demonstrate how those material media landscapes have evolved over the course of the 19th, 20th, and 21st centuries.

A team of designers and programmers at XYZlab at Parsons The New School for Design created the toolkit. This lab focuses on the “collaborative development of participatory design tools for urban research, social networking, community participation and non-linear storytelling”. Their focus on collaboration was central to the project, as the interface was conceived not only for displaying information but also to be “an ideal toolkit for researching, visualizing information and analyzing a range of content.”

The group presented the project at the Mobility Shifts conference at The New School in 2011. In a panel on urban mobility and research, they presented the URT as “being developed to maximize the benefit of two primary interfaces – web and mobile“. As a result, they could visualize how contemporaneous networks overlap, complement, or compete with one another.

The success of this project could seem contradictory. Even though there are few traces of its existence (at least on the internet), its theoretical framework has been carried forward in more than 30 projects developed in the Urban Media Archeology class. As the original project wanted to put the URT both inside and outside the classroom, the original goal was to create thematic layers for projects ranging from mapping archival documents to sensing data flows to visualizing urban narratives. As I stated before, this goal was achieved through the Urban Media Archeology class, especially because the use of available platforms, which ranged from a quilted map to a hand-dissected map to an audio map, allowed the idea to extend into the process of prototyping. For that reason, the research was more concerned with the ideas behind the project than with its execution.

I believe that such approaches to interfaces not only contribute to but also open a discussion on how to present results that put theory into the construction of objects. I wonder how the process of research, comparing the original project with its repetition in a class context, also speaks to the time involved in theory production, especially its continuous and patient labor. Although the research for each project was constructed and documented in the case of the Urban Media Archeology courses, there are no clear traces of how the students have expanded this work. Still, it demonstrates how theory-making can move from one platform to another and, as in the process of designing the URT, how it can emerge from the work of humanists and designers.

One of the Urban Media Archeology projects is “The New Bohemia”, which seeks to visualize the growth and decay of artist communities in Williamsburg through three historical layers (1982, 1992, and 2012).



Extra-Textual Project Review: NCSU’s VirtualMLK

I couldn’t pass up the opportunity to look into a project from my own discipline, namely the ever-so-rare rhetoric Digital Humanities project. The VirtualMLK project is a historical recreation that utilizes 3-D visualization technology to map the space and recreate the audio of Martin Luther King Jr.’s “A Creative Protest” speech, delivered in 1960 in Durham, North Carolina. The project exists in three phases: the audio, the website for circulation, and an immersive lab experience. The project included an installation to experience the audio and visuals of the speech, a reenactment, and audio and video recordings posted on the website. These recordings allow users to listen to audio that differs based on where in the church audience members would have been sitting. Additional resources include historical background and pedagogical resources based on the project.

As there was no audio record of the speech, the team worked from pamphlets of the speech and hired a voice actor who delivered it for the original congregation at White Rock Baptist. They recorded audio from this performance, in which audience members were encouraged to participate, as well as from studio recordings. Research in Phase 3 has yet to be completed, but it will involve a virtual model in the lab.

A litany of labor (and laborers) is both explicit and implicit in the description of the project. As to the former, the VirtualMLK project emerges from collaboration among the Department of Communication at North Carolina State University, the North Carolina Humanities Council, and the NCSU Libraries, though the degree to which these agencies are involved is unclear. The about page lists Professor Victoria Gallagher as the principal investigator, along with another faculty member and two graduate students as part of the Research Team, though their roles are not delineated. “Contributors” include the White Rock Church and three community members. Though not on the same page, the website also leverages the “authenticity” of audience participation in the recordings. The contextual background links to an article by Professor Gallagher, as well as another faculty member and graduate student who are not listed as part of Phase 1. Though the project website emphasizes the audio and visual techniques, reference to research weaves throughout the site but is not as clearly articulated as part of the labor. In general, it seems likely that significant research would be needed: finding artifacts about the speech’s delivery, studying other audio recordings of MLK, and consulting community members who were at the speech (as mentioned on the website). This kind of project is far outside the realm of my expertise, and I would be interested in what kinds of research, traditional and otherwise, would be necessary to attempt an accurate recreation. It is unclear whether any specialists were involved in audio production, the website itself, or the visualizations.

Phase 2 of the project involved the creation of a website to share the audio for “scholars, students, and citizens to explore.” Interestingly though, the project seems to have circulated primarily through regional press. The speech they chose was delivered locally, and Phase 3 focuses on creating a lab space for an “immersive experience” through which to interact with the speech, aimed at local community members and students. Despite the digital component, the primary audience seems to be a substantially local one. The project has also involved public events for the local audience.

In the most simplistic sense, projects like VirtualMLK demonstrate a kind of success in use. The local emphasis might allow for pedagogical uses such as student field trips. In a more ambitious sense, projects like this have some potential to push forward our understanding of historical public address in interesting ways. The project is interesting both for its capacity to function as a tool for community and pedagogical outreach on a significant figure in the field and for what it might tell us about delivery in a historical event. The project coordinators emphasize the historical significance of the speech: its endorsement of non-violent direct action, its local significance to the church, and the potential it held to explore digital methods. Trying to merge digital humanities and rhetorical research questions is an explicit part of how the project is framed. Though the project and author escape me at the moment (help, Paul!), there is another project that preceded this one, focused on a kind of digital public address: a strictly digital recreation of the visuals and audio of a Roman forum. The common narrative about that project is that it reveals very few people would actually have been able to hear a speaker.

Save the Humanities Bot

Hi all! Sveta here. For my review assignment, I chose to take a look at Mark Sample’s Save the Humanities Bot.

Save the Humanities (@SaveHumanities) is a Twitter bot that, according to its bio, provides “Daily tips on how to stop the crisis in the humanities. Real solutions!” Here are several sample tweets from the past couple of weeks (the first one being the most recent post). Every tweet begins “To save the humanities we need” and finishes with the second half of an “I need” statement culled at random from a different public Twitter feed. (I am pretty sure, but not 100% certain, that that is the source material for the second half of each post.) This produces results that are mostly ridiculous but occasionally hint at something more profound. Oftentimes, they inadvertently reference current events, as is the case with the second example below, which refers to the Brian Williams scandal.
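If I am right about the source material, the composition step itself is simple text grafting. Here is a minimal sketch in Python; the source tweets and the function name are my own invention, and the real bot of course pulls its material live from Twitter rather than from a hardcoded list:

```python
import random

PREFIX = "To save the humanities we need"

def make_tweet(source: str) -> str:
    """Graft the fixed opener onto the tail of an 'I need ...' statement."""
    tail = source.strip()
    # Drop the 'I need' opener, keeping the second half of the statement.
    if tail.lower().startswith("i need"):
        tail = tail[len("i need"):].strip()
    return f"{PREFIX} {tail}"

# Invented stand-ins for statements found on other public feeds:
sources = [
    "I need a nap and a very large coffee",
    "I need someone to explain my taxes to me",
]
print(make_tweet(random.choice(sources)))
```

The humor, in other words, is almost entirely a by-product of the found material; the code contributes only the fixed frame.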

The bot is a response to the numerous think pieces about the relevance of the humanities (and perhaps also to the rhetoric around digital humanities as a solution to a crisis in the humanities). At the time of writing, Save the Humanities bot has 876 Twitter followers. Unsurprisingly, the majority of them seem to be  graduate students with a professed interest in digital humanities or related fields.

This being a bot, tweets are generated and sent automatically, according to an algorithm established by the bot’s creator. In this case, that person is Mark Sample, Associate Professor of Digital Studies at Davidson College (North Carolina). Sample makes a lot of Twitter bots; he’s also behind @FavThingsBot, @WhitmanFML, @blackboughbot and others. The practice of bot-making, which Sample characterizes as “creative coding,” seems to be a considerable part of his broader academic project. He has written about the practice of creating bots as a path into theorizing digital media (here’s a blog post on this). Because Sample is a tenured professor and this work is part of his scholarship, his labor on such projects is compensated as part of his formal job description. I would venture to guess, although I can’t say for certain, that he has more freedom to focus on these extra-textual forms of scholarship now that he is based in a department of Digital Studies (as opposed to a prior appointment in English).

There is not a formal editorial process behind @SaveHumanities, as the content is generated at random. However, the content of the bot’s tweets is not necessarily how its success is measured. Rather, its utility lies in the process of “creative coding” and the way that process contributes to Sample’s theory-making. Additionally, in a recent essay on protest bots (where he makes an analogy between protest bots and protest songs), Sample argues that the success of certain kinds of bots lies in their ability to, quoting Adorno, “present society a bill it cannot pay.” That is, (protest) bots can potentially act as social critics. Although the extent to which this is true is certainly up for debate, @SaveHumanities and the practice of “creative coding” certainly offer a kind of cultural technique, or model for further inquiry. In fact, Sample is not the only academic whose scholarship incorporates the practice.

I cannot imagine something like the @SaveHumanities bot making a substantive contribution to LIS scholarship in the traditional sense. Unlike in a field like Digital Studies, I don’t necessarily see bot-making leading into theory-making. (Although, never say never, I guess.) I do, however, see a place for this kind of work within the kind of digital pedagogies some in LIS are engaged with, especially around digital literac[ies]. Making Twitter bots, even utterly silly ones, is a way to get hands-on coding experience. Because bots re-purpose something that was originally created for a different purpose (automated customer service, targeted advertising, etc.), the process of making one’s own bot could be a lens through which to reflect on computational culture more broadly. Although making a bot that collects and aggregates information in real time (like @SaveHumanities) is more complicated, building a Twitter bot based on a predetermined corpus is actually relatively simple.  To start, take a look at this tutorial post from Robin Davis, an Emerging Technologies Librarian at CUNY.
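To give a sense of just how simple the predetermined-corpus approach is, here is a sketch in Python. The corpus lines are invented, and the actual posting step, which requires Twitter API credentials and a client library, is deliberately left out:

```python
import random

# A tiny predetermined corpus; a real bot would load a larger list
# from a file and post one line on a schedule via Twitter's API
# (credentials and the posting call are omitted here).
corpus = [
    "Tip: read one old book for every new think piece.",
    "Tip: cite a librarian today.",
    "Tip: defend a comma in public.",
]

def make_schedule(corpus: list[str]) -> list[str]:
    """Shuffle a copy of the corpus so each line is posted once per cycle."""
    queue = corpus[:]  # copy, so the original list stays in order
    random.shuffle(queue)
    return queue

for status in make_schedule(corpus):
    print(status)
```

Everything bot-like about the result, the scheduling and the posting, is boilerplate around this loop, which is why the genre works so well as a first coding exercise.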

Media Commons Journals



My first impulse was to look at In Media Res, an alternative online academic publication that I often assign to my students as a preservation mini-project. In following the links from some of the sites discussed in class last week, I was reminded of its sister project, The New Everyday, because some of the people associated with the maker websites have published ‘clusters’ there. These two publications are both part of the larger Media Commons project, itself part of, or at least an offshoot of, the Institute for the Future of the Book. According to its own website, the Institute for the Future of the Book is a small think tank that aims both to ‘chronicle’ the shift from print to screen and to drive it in positive directions, whatever that might mean. One of its founding members is Bob Stein, well known for his inventive projects that predated the e-books we know today.

The audience for the two journals mentioned, In Media Res and The New Everyday, is predominantly academic. TNE publishes journal-like articles in clusters and functions as a blog/journal hybrid. Academic posts are open to commentary from both other scholars and, in principle, the public. This is meant to serve as a form of public and transparent peer review. IMR works like a piece of time-based media art. A weekly topic is suggested by a scholar, who acts as curator, and five articles are produced around this topic. Each must feature a media clip. The articles are released one a day for a week beginning on a Monday, so the entire series isn’t viewable until Friday. As with TNE, there is space for commentary on each article. Both are available through websites hosted by NYU Libraries via the Media Commons project.

The users and creators end up being the primary audience, although the sites are publicly available online. In this case, the project is partly designed to provide a platform in which scholars can exercise and demonstrate curatorial skills, with the hope that these might some day be considered in decisions such as promotion and tenure. The cluster or weekly curator is supposed to select and arrange the content, both by preplanning and finding authors to contribute, but also by ostensibly arranging and highlighting the commentary that is posted in a way that furthers discussion and deepens understanding. In this case, the labour is primarily on the shoulders of the scholars: they write the articles, curate the sites, and often provide the majority of the commentary. Since the commentary is supposed to act as a form of peer review, they contribute their labor in this way as well.

I think the fact that these are still being actively used and maintained constitutes a success in its own right, as I first became aware of these after they were already up and running in 2010. Their original purported goals were to make inroads in changing the kinds of criteria on which scholars were judged, and I think this change has been slow in coming, if it is even coming yet at all. That is precisely where this type of project fits within the larger discourses about maker and other creative spaces that seek to highlight and encourage alternative modes of scholarly production.

In both cases, the cultural technique being promoted is this idea of curation: that careful arrangement renders a collection more than the sum of its parts. Both journals seek to market this as a skill that should be recognized and rewarded. I don’t know that I would consider either of these journals to be precisely in my field; I have not come across the level of saturation with these journals that would indicate the critical mass of users necessary for this cultural technique to take hold in any particular field or discipline. People in LIS don’t know them; neither did people in media studies classes.

I attended a conference a couple weeks ago whose theme was ‘teaching social justice’. One of the recommendations, from a panel on teaching social justice as junior faculty, was the idea that when working in alternative spaces, it is incumbent upon you to present work that is thoroughly and transparently rigorous. This is necessary as a sort of first line of defense against mainstream voices who might otherwise dismiss the work you are doing: the old adage that you have to do things ten times better to be considered just as good. It seems to me that in this alternative space of publication, this is very true, and I wonder what such rigor would look like. In both projects, the articles are by nature brief. The extra substance is designed to come from the interaction and commentary. But in order for such contributions to be considered on the level of a traditionally peer-reviewed journal piece, the commentary would have to be fairly meaty, with a number of comments, responses, and interchanges with other scholars or public figures who could demonstrate their own expertise or credentials such that their additions to the conversation could be seen as real alternatives to traditional peer review. In my head, the way to do this is via critical mass and saturation within a discipline: accumulating the participation of the necessary voices and encouraging them to expend time and labour on something that isn’t yet seen as valuable in the way other uses of a scholar’s time might be. I don’t know how to measure this or engender it.