About Me

I am Professor of Digital Humanities at the University of Glasgow and Theme Leader Fellow for the 'Digital Transformations' strategic theme of the Arts and Humanities Research Council. I tweet as @ajprescott.

This blog is a riff on digital humanities. A riff is a repeated phrase in music, used by analogy to describe an improvisation or commentary. In the 16th century, the word 'riff' meant a rift; Speed describes riffs in the earth shooting out flames. The poet Jeffrey Robinson points out that riff perhaps derives from riffle, to make rough.

Maybe we need to explore these other meanings of riff in thinking about digital humanities, and seek out rough and broken ground in the digital terrain.

24 April 2012

Dirty Books, Densitometry and the Digital Humanities



I am immensely grateful to Erik Kwakkel of Leiden University for drawing my attention via Twitter at the weekend to an important piece of recent work which to my mind provides a model for the sort of innovation we should be developing in the digital humanities. It is a completely experimental approach which doesn't produce a sustainable digital resource, raise questions about standards or encourage us to integrate data in different fashions, but it is more provocative and thought-provoking than a thousand lavishly-funded TEI online editions. Of course, there is room within the big tent of the digital humanities for all such approaches, but my anxiety is that the digital humanities, as it grows increasingly complacent, inward-looking and risk-averse, will lose touch with this kind of avowedly experimental work, which was perhaps more commonplace fifteen years ago than it is now.

The BBC story which Erik posted described a piece of research by the St Andrews art historian Kathryn Rudy under the headline 'Secrets Revealed by Dirty Books from Medieval Times' and suggested that measurement of dirt on medieval manuscripts could indicate which pages were most frequently handled by their medieval owners. My immediate reaction was to feel doubtful about the validity of such an approach, knowing how frequently major libraries clean manuscripts. However, reference to the full article, published in the Journal of Historians of Netherlandish Art and available from the St Andrews Institutional Repository here, revealed a much more subtle and important piece of research. It is common in medieval manuscripts to see how oil and dirt from constant handling discolour certain pages. Dr Rudy used a device called a densitometer, which measures the reflectivity of a surface in a way that will not damage the manuscript. As pages are handled, the surface of the vellum becomes darker. Densitometer readings will in theory indicate which pages were most frequently handled.

Dr Rudy offers fascinating analyses of the way in which the densitometry data provides evidence of how different owners of particular manuscripts made use of them, and in particular which sections of the manuscript they read most often. Securing this information was not easy - Dr Rudy used a densitometer on about 200 manuscripts, but only got useful information on 10% of them. As I suspected, one of the main problems is modern cleaning of manuscripts. Large institutions such as the British Library and the Victoria and Albert Museum have historically tended to clean the surface of manuscript pages at the same time as rebinding or repairing them, and the huge swathes of rebinding of medieval and other manuscripts which caused such immense damage and loss of evidence in the British Library up to the 1970s also destroyed evidence which could now be explored by the densitometer. Historically, as the Conservation Wiki notes, bread crumbs were often used for this surface cleaning. The wiki gives the following advice for baking bread for use in cleaning your manuscripts: 'Bread has been historically used as a surface cleaning material, but is no longer in general use. Bread should be baked without oils, yeast, or (potentially abrasive) salt. (SD) Traditionally, day old bread was preferred, as it was not as moist as fresh bread and may have had “tooth” to facilitate better cleaning. Crusts were removed and the bread was pressed into the paper surface with a rolling motion. (EO) Residual bread may support mold growth. (RA)'.
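
The underlying analysis is simple enough to sketch in a few lines of code. The readings below are entirely invented and the procedure is my own paraphrase rather than Dr Rudy's actual method, but it shows the principle: take a baseline reflectance for the volume and rank the folios by how far they fall below it.

```python
# Hypothetical densitometer readings: reflectance per folio.
# Lower values mean a darker, more heavily handled page surface.
readings = {
    '1r': 0.82, '1v': 0.80, '14r': 0.61, '14v': 0.63,
    '57r': 0.79, '57v': 0.78, '102r': 0.58, '102v': 0.60,
}

# Use the volume's median reflectance as a baseline, then rank
# folios by how far below that baseline they fall.
baseline = sorted(readings.values())[len(readings) // 2]
darkening = {folio: baseline - value for folio, value in readings.items()}

for folio, d in sorted(darkening.items(), key=lambda item: -item[1]):
    print(f'{folio}: {d:+.2f}')
# On this invented evidence, folios 102r-v and 14r-v head the list:
# the openings the manuscript's owners returned to most often.
```

The hard part, of course, is everything the code ignores: calibrating the instrument, allowing for uneven vellum, and ruling out post-medieval cleaning and handling.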

Another issue in the use of densitometers with manuscripts not noted by Dr Rudy is that modern usage of manuscripts also causes discolouration. In a volume which contains a number of different medieval codices bound together, it is often striking how a well-known section which has received a great deal of scholarly attention is very seriously discoloured, whereas a less well-known part of the manuscript is much cleaner. Because of the way in which the reconstruction of the Cotton Library in the British Library was undertaken and documented, there are two or three medieval manuscripts which are shown in all the catalogues as destroyed but which were in fact restored and have been preserved. These manuscripts have not been touched by more than half-a-dozen people since they were restored in the middle of the nineteenth century. It is striking how very much cleaner these volumes are than those Cotton Manuscripts which have been regularly consulted in the Manuscripts Reading Room during that time.

There is clearly a great deal to do in developing this new method of manuscript densitometry, and this is a task which should be taken up by scholars working in the digital humanities. Nevertheless, the scholarship of Dr Rudy's first experimental use of this technique is very striking and it is difficult to disagree with Dr Rudy's claim that 'We can add densitometrical analysis to the manuscript scholar's toolbox of forensic techniques, which also includes the use of ultraviolet (UV) light or other techniques to help to disclose texts that have been scratched out'. The potential value of densitometry is not restricted to manuscripts. It would be interesting to analyse some early printed books, comparing how the different owners of copies of the same edition approached the book. Or we could take a library like that of Thomas Jefferson or Edward Gibbon, and analyse which books they were most interested in. There is a huge new potential field of investigation here.

At the end of her excellent article, Dr Rudy enters an important plea: 'As we listen to the last gasp of the physical book, it is important to think about this material evidence and what it represents. What we have to gain by digitization and by abandoning the book as a physical object may be negated by what we have to lose'. She goes on: 'I make a similar plea that, as libraries continue to digitize medieval illuminations, they continue to grant access to the physical objects, which always hold more evidence than we first perceive. The Koninklijke Bibliotheek in The Hague, which preserves many of the examples taken up in this study, for example, has been in the forefront of digitizing images from its illuminated manuscripts, but at the same time has reduced the opening hours of its reading rooms. But they have done so partly because the reading rooms are frequently empty. It would seem that manuscript historians are largely content to study a digital copy from home if it exists. The convenience of digital facsimiles might be heralding the end of codicological approaches to manuscript studies. This is lamentable, as there is much subtle information stored in the physical object'.

This is a real challenge, and scholars working in the digital humanities must wonder how far, in their naive techno-enthusiasm, they are culpable here. By giving us new means of exploring and investigating cultural artefacts such as books and manuscripts, digital technologies have made access to and engagement with original objects more and not less important. Yet too often scholars working in the digital humanities give out the message that what counts is data and information, and that this can somehow be investigated in a fashion disconnected from its physical roots. This is a route to a major cultural disaster. We may throw up our hands in horror at the Victorian and early twentieth-century destruction of bindings and other aspects of medieval manuscripts, but the digital humanities is actively colluding in encouraging approaches which are potentially equally destructive. We can help avert this looming disaster by showing how digital technologies give us more tools to engage with the original manuscript and printed book, and by leading a renewed engagement with books and manuscripts in library and archive reading rooms. The slogan of many librarians in the 1990s was 'access not collections'. Practitioners of the digital humanities should aim to replace this with 'collections and access'.
 


16 April 2012

Geo600: Gravity's Rainbow


I have a worry that this blog could start to assume a very elderly and curmudgeonly tone, and that I will start to establish myself as a sort of digital Victor Meldrew. I certainly feel that it is one of the roles of the digital humanities scholar to try to counter the kind of puppyish techno-enthusiasm which seems to believe that Twitter (or Tumblr or Instagram or whatever is next) can solve the problems of humanity. The digital humanities should be a means by which more rigorous critical and theoretical perspectives can be brought to bear on our engagement with the changing digital world. We should want to own an iPad and feel that we can make use of it (strongly yes on both counts for me), but we should also recognize that as a cultural, political and social object, the iPad raises lots of very challenging questions as to how knowledge will be controlled, commodified and mediated in the future. However, in developing such critical perspectives, there is a risk of losing one’s enthusiasm for innovation. In its earliest days, humanities computing was notable for the way in which it was constantly pushing the envelope and trying new things. I wonder whether we have lost something of that spirit.

I was prompted to reflect on the importance of innovation by a fascinating article in yesterday’s newspaper about a remarkable project to detect gravitational waves. The newspaper described the project as Anglo-German, but two of the main collaborators in Geo600 are based in the Physics Department at my former home of the University of Glasgow, and I wish I had known about the project while I was at Glasgow, because I would have beaten a path straight to its door.

I’ll try to summarise the project, though since I only scraped an O-level in Physics, mine will be, I'm sure, an inept and crude account. Einstein proposed that big stellar events like supernovae send out gravitational waves which sweep through the universe. However, according to Einstein, these waves would be so weak that it would be impossible ever to detect them. Thus, when light from the supernova explosion that formed the Crab Nebula reached the earth in 1054, a gravitational wave would also have reached the earth at about the same time, but its effect would have been barely perceptible.

The Geo600 project is attempting to achieve what Einstein thought impossible – to measure the impact of these gravitational waves. In order to do so, it is necessary to design and construct incredibly sensitive detectors, capable of detecting changes which would cause the detector to move by only a few hundred billion-billionths of a metre. The detectors that have been built are so sensitive that they show the effect of the waves pounding on the beach fifty miles away, or will be affected by the gravitational pull of a person walking past. If these detectors are successful, they will prove Einstein right in predicting the existence of gravitational waves, but wrong in thinking that these waves could never be detected. Verifying an important aspect of the General Theory of Relativity is clearly valuable enough as a scientific outcome, but the Geo600 project also proposes completely to transform the nature of astronomy. To quote Professor Jim Hough at Glasgow: ‘We are going to create a completely new kind of astronomy… Until now, everything we have learned about the universe has been based on studies of electromagnetic radiation – from infrared to visible light to gamma ray detection. Gravity waves will create a completely new type of astronomy’.
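
For a sense of scale (this is my own back-of-envelope arithmetic, not a figure from the article, and it assumes the 600-metre interferometer arms from which Geo600 takes its name): a few hundred billion-billionths of a metre is of the order of 10⁻¹⁶ m, so the dimensionless strain being sought is roughly

```latex
h = \frac{\Delta L}{L} \approx \frac{10^{-16}\ \mathrm{m}}{600\ \mathrm{m}} \approx 2 \times 10^{-19}
```

Numbers of that order go some way to explaining why a person walking past, or surf on a beach fifty miles away, registers on the instruments.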

The humble humanities scholar may feel that she or he will never need or want to develop such ambitious projects. But reading the article on gravitational waves made my mind run back to a quotation from Charles Babbage, the Victorian pioneer of computing, that I used in a talk at the University of Kentucky in 1995. Here’s what I said then:

"In the Ninth Bridgewater Treatise, Babbage pointed out how a knowledge of mechanical laws gives you a different view of the world. When you speak, the waves spread out, gradually losing strength and impetus, but still remaining, until the only trace is perhaps in the movement of molecules, but still there. Likewise, the cries of a drowning man would create sound waves which would spread out through the water, until only the water atoms retained the impression of them. With a sufficiently powerful computer, Babbage speculated, you might be able to detect those faint traces and recover the last words of the dying man. This would be true of any object - the Beowulf manuscript would retain the faint impression of the conversations the scribe had while writing it. That is the meaning of the phrase I suggested to Kevin as a motto for this conference, and which Ackroyd also uses in his novel: 'Every atom, impressed with good and with ill, retains at once the motions which philosophers and sages have imparted to it, mixed and combined in ten thousand ways with all that is worthless and base. The air itself is one vast library, on whose pages are for ever written all that man has ever said or woman whispered.' Babbage's notion that you could recapture those words must have seemed bizarre in 1837, but in these days of chaos theory it seems less strange. Perhaps one day we will hear the Beowulf scribe speaking. There is certainly a challenge there which I think we should take up".

This image that ‘the air itself is one vast library’ was also taken up, I noticed, by James Gleick in The Information. If we can detect gravitational waves, can’t we also detect sound waves from the past, and open up the vast library in the air? Is it really so impracticable? Isn’t this the kind of innovation that the digital humanities should be working with physicists and other scientists to take forward? It seems that we don’t develop visionary research in the humanities on the same scale as in the sciences. It is this kind of visionary research, the ‘big humanities’, that scholars in the digital humanities should be arguing the case for.

Postscript 17 August 2012

The idea that in some way sounds of the past can be recaptured from the air occurred to others apart from Babbage. Friedrich Kittler's challenging and celebrated work, Gramophone, Film, Typewriter, reproduces Salomo Friedlaender's short story, 'Goethe Speaks into the Phonograph' (1916). This takes up the idea that the air in a room where Goethe once spoke would still retain the impression of the waves generated by his voice, but adds a grotesque aspect by suggesting that, in order to recreate Goethe's voice, it would be necessary for the airwaves to be directed across Goethe's vocal cords (which fortunately had been preserved after his death). For Kittler, the idea of recapturing these historic sound waves from the air reflected the awareness of sound waves created by the invention of the gramophone. If the abiding image of the digital is the binary opposition of on/off, the gramophone reflected the triumph of the analogue unit of the wave. In a sense, this idea of recapturing the past from soundwaves in the air might be seen as an analogue fantasy.
  


7 April 2012

To Code or Not to Code?


Easter 1982 – thirty years ago! – was spent feeding my latest addiction. Like over a million others, I had acquired the Sinclair ZX81, which popularised home computing in Britain. It had just one kilobyte of on-board memory; I soon invested in the upgrade to take it up to 16 kilobytes. You used your television as the monitor, and loaded programs from audio cassette tapes. My love affair with Sinclair only came to an end when the even more awesome Amstrad PCW came along a few years later. Indeed, checking references just now, I came across Sinclair ZX81 emulators, and began to feel some of the old passion stirring.

In order to get the Sinclair to do anything, you had to program in the Sinclair flavour of Basic. Even to get a word to display on your television, you had to write and run a short program. For some, this was a problem. One of the reasons why the BBC decided to use Acorn computers rather than the Sinclair machines to promote computer literacy in schools was that the producer of the BBC series The Computer Programme, Paul Kriwaczek, ‘did not believe that the future of computers lies in everyone learning to program in BASIC’. Yet, for me and I suspect many others, it was precisely the programming that was so fascinating about the Sinclair. As you sought to develop a program that would, say, enable you to do some primitive word processing, the hours and days would disappear as you played with variables and loops. I became obsessed with trying to produce a program to calculate the date of Easter. Dates in medieval documents are generally given by reference to religious festivals, so dating medieval documents involves cross-checking tables in a Handbook of Dates. This is obviously a process that can be automated, and calculating the date of Easter, on which the movable feasts depend, would be a first step towards this. I honestly believed, in a fit of youthful delusion, that somehow I could produce an automated Handbook of Dates on a Sinclair ZX81. Of course, I was unsuccessful; amazingly, there still doesn’t seem to be an automated version of the Handbook of Dates online. I gave up when I realized how much time my addiction to Basic programming was consuming – I am convinced that I would have completed my PhD thesis two years earlier if I hadn’t purchased a Sinclair ZX81. I realized that I was spending all my time becoming a low-end computing hobbyist when I should have been concentrating on becoming a reasonably accomplished historian.
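
The calculation that defeated me is, it must be said, perfectly tractable. What follows is a sketch in Python rather than Sinclair Basic (and not, alas, my old code), using the well-known anonymous Gregorian algorithm, often credited to Meeus, Jones and Butcher, which yields the date of Western Easter by integer arithmetic alone:

```python
def easter(year):
    """Date of Western Easter for a Gregorian-calendar year, computed
    with the anonymous Gregorian (Meeus/Jones/Butcher) algorithm."""
    a = year % 19                         # position in the 19-year lunar cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # fixes the Paschal full moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days to the following Sunday
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

print(easter(1982))   # (4, 11): Easter Sunday fell on 11 April 1982
print(easter(2012))   # (4, 8)
```

An automated Handbook of Dates would essentially be this function plus a table of the fixed feasts, with the movable feasts derived as offsets from Easter.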

My experience with the Sinclair ZX81 perhaps prefigures a debate which is still active within the digital humanities – namely the extent to which practitioners of the digital humanities should be hands-on programmers, and the level of hands-on computing engagement we should expect from scholars of the digital humanities. Stephen Ramsay’s now celebrated intervention at the 2011 MLA, ‘Who’s In and Who’s Out’, refined by a subsequent post, ‘On Building’, argued that the creation of digital objects of all types should be a fundamental concern of practitioners of the digital humanities. Ramsay points out that humanities scholars are familiar with theorizing (say) maps as cultural artefacts, but that the experience of mapping in GIS gives new perspectives. He argues that ‘Building is, for us, a new kind of hermeneutic – one that is quite a bit more radical than taking the traditional methods of humanistic inquiry and applying them to digital objects. Media studies, game studies, critical code studies, and various other disciplines have brought wonderful new things to humanistic study, but I will say (at my peril) that none of these represent as radical a shift as the move from reading to making’.

The anxieties expressed in the discussion of Ramsay’s blog posts echo through the recent volume of Debates in the Digital Humanities (an extraordinarily Americo-centric volume for a discipline which claims to be highly collaborative and international in its scope and outlook). Indeed, Ramsay can be seen as anticipating recent wider arguments in Britain that coding should receive more attention in schools. Last Saturday, John Naughton launched in the Guardian a manifesto for teaching computer science in schools which emphasized the learning of code in a way that must have gladdened the heart of Sir Clive Sinclair. Indeed, the Raspberry Pi seems to take us back to the days of the ZX81, and has already proved very successful in helping children understand how the digital devices which pervade their lives actually work. In my recent article in Arts and Humanities in Higher Education, I argued that it is essential for humanities scholars to become more than mere consumers of digital resources. If this is to be achieved, some understanding of the nuts and bolts of such resources is essential.

But does this mean that humanities scholars, in order to engage with the digital world, must become coders? Isn’t there precisely the danger that I found with my Sinclair machine, that I was becoming a poor coding hobbyist at the expense of good humanities scholarship? I think Ramsay’s use of the term building is important here. In creating Electronic Beowulf, Kevin Kiernan and I were completely dependent on the skilled help of a number of computer scientists and programmers, but we were nevertheless building something which was both a statement about the nature of Beowulf and a vision of what digital technologies can achieve. It is here that the collaboration which is seen as a distinctive feature of the digital humanities comes in. Something like Electronic Beowulf or the projects created by the Department of Digital Humanities at King’s College London simply cannot be achieved without a wide range of skills embracing not only humanities scholarship but also computer science, project management, programming in a variety of forms, interface design, server management and much else.

Much of my thinking about digital projects is informed by my experiences at the British Library in the 1990s, and in particular the Library’s work in designing the original automated systems which gave catalogue access and allowed automated book ordering in the St Pancras building. A naïve user (aka a humanities academic) would assume that to build those systems you either buy a piece of software or get some programmers in to build the system. But building a robust bespoke automated system is more complex than this. Librarians, as users, define the need. An army of analysts then define the logical structures required to meet these needs and assess the array of technical possibilities available. These logical definitions are then broken down into units of work. The system was actually designed in an enormous amount of detail on paper, with a mass of flow diagrams, before a line of code was written, and this was in many ways the intellectual heart of the development. An army of programmers then built the various modules defined in the project specifications. The crucial element in this process was not the coding, but rather the design on paper. The analysts who produced this design were the most important (and most highly paid) people in the whole process, yet generally they had very limited programming skills. The coders who actually built the system were at the bottom of the food chain, producing elements of the system to order, frequently with only limited understanding of how the whole system worked.

My experience at the British Library taught me that automation should not be equated with coding. In many ways, it is providing the overall vision and defining – on paper – the steps by which that vision can be realized which is the key part. This, after all, is what computer scientists spend a lot of their time doing. Such a process requires an understanding of the tools and methods available, but is not wholly focused on the creation and deployment of these tools. Again, an analogy from the library world is I think helpful. It is essential for all librarians to have an understanding of cataloguing standards and methods, but it is not necessary for all librarians to be cataloguers. A scholar in the digital humanities should be sufficiently well informed about the technical environment to develop an independent and critical approach to the use of digital methods and resources, but does not necessarily need to be a hands-on programmer.

I worry that an emphasis on coding, and even on building things, is holding the digital humanities back as an academic discipline. We emphasise collaboration, and collaboration is certainly necessary if practitioners of the digital humanities are to build innovative digital projects, but are our patterns of collaboration always the right ones? The Department of Digital Humanities at King’s College London has worked with dozens of academic partners both at KCL and elsewhere to realize an impressive portfolio of projects. The Department quite rightly stresses collaboration as being at the heart of its philosophy. Yet I have been struck in the few months that I have been working in the Department by how often our external academic partners assume that they are the driving force in the collaboration. For them, the humanities scholar is always the person who calls the shots; the digital humanities specialist is simply there to do the donkeywork of programming the machine to do what the academic wants. Collaboration turns out to be a mask used to disguise the true nature of much of the Department’s work, which is too often the kind of software development or infrastructural maintenance normally provided by a university service department. Now, it could be argued that academics should not see themselves as superior to information service departments, and I would strongly agree with such a proposition, but it is nevertheless sadly true that academics perceive themselves as at the top of the university tree, and most humanities academics evidently regard digital humanities units (even when these are constitutionally defined as academic departments) as representing something lower down the higher education food chain.

Among the controversies to be considered by the Cologne Dialogue on the Digital Humanities later this month is the question ‘Do the Digital Humanities have an intellectual agenda or do they constitute an infrastructure?’. My colleague Willard McCarty will be presenting an impressive defence of the intellectual component of the digital humanities, but one wonders whether the question is correctly put here. The issue is not whether, as Anthony Grafton put it, digital media are always means rather than ends. A lot of the problem is (as Willard will be suggesting) one of confidence – scholars in the digital humanities too often see themselves as serving longer established academic disciplines and lack the chutzpah to develop their own intellectual programme which doesn’t need to pay so much attention to others. The question is how the digital humanities stops presenting itself as an element of infrastructure, as something which helps other scholars realize their visions, and realizes that it doesn’t need to be dependent on classicists or historians or literary scholars to keep going. Part of the reason why the digital humanities is treated by other scholars as a support activity is because of its interest in programming and coding – it becomes the gateway by which scholars can gain access to this new digital world. One of the many threats confronting the digital humanities is that it will increasingly become part of the service infrastructure. The suggestion that the term digital humanities will soon disappear as all humanities scholarship becomes digital is predicated on the idea that the digital humanities represents a form of specialist support activity which will soon no longer be required. Certainly, the digital humanities should build things – it should be pioneering the creation of new forms of scholarly discourse in a digital environment – but it should not simply be building things for other scholars, and that has too often been the case.

Indeed, it could be argued that the digital humanities as a whole has fallen into exactly the same trap I was concerned about with my Sinclair ZX81. By insisting on building things ourselves, we simply come up with slightly amateurish packages which fail to make a large-scale impact or simply repeat existing procedures across different subject domains. The pioneering days of digital editions were very exciting and innovative, but having established what we think of as an accepted procedure, we now repeat it again and again and again in different subject domains for different groups of scholars. When practitioners of the digital humanities are going to build things, these objects should be truly innovative and should restate our sense of what is possible in a digital environment. In the recent Institute of Historical Research seminar on ‘The Future of the Past’, I was very taken by Torsten Reimer’s call for the digital humanities to renounce the sort of digital photocopying that is commonly associated with the creation of digital editions and rather seek to develop genuine innovation that moves into new territory in both our cultural engagement and our sense of the possibilities of computing. The deployment of TEI and its role in the development of XML were truly innovative and helped create the modern web, but that was nearly twenty years ago. Since then, what true innovation has emerged from the digital humanities? Zotero? Citation managers were available before it appeared. Crowdsourcing? Simply borrowed from other domains. The digital humanities has little to show in the way of true innovation, yet all those engaged with the digital humanities know that the complexities of the humanities offer endless possibilities for the creation of innovative technologies in areas ranging from imaging to nanotechnology. Consider the hyperlink. What a crude mechanism it is. Any textual scholar could imagine more complex and interesting possibilities. The digital humanities could readily look to develop the next stages beyond hypertext. Yet it doesn’t – because it is too busy preparing digital editions for historians who don’t otherwise have access to programming resources.
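
To make the point concrete: the web’s link is an untyped, one-way pointer. Even a toy data model (entirely of my own devising, purely for illustration) suggests how much richer the scholarly possibilities are – links that carry a relation, run in both directions, and anchor to precise spans of text rather than to whole pages:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Anchor:
    document: str   # identifier of the witness or edition
    start: int      # character offset where the anchored span begins
    end: int        # character offset where the anchored span ends

@dataclass(frozen=True)
class TypedLink:
    source: Anchor
    target: Anchor
    relation: str   # e.g. 'quotes', 'glosses', 'contradicts', 'emends'
    note: str = ''  # scholarly commentary travelling with the link

class LinkBase:
    """A link store that, unlike the web, can be traversed both ways."""
    def __init__(self):
        self._links: list[TypedLink] = []

    def add(self, link: TypedLink) -> None:
        self._links.append(link)

    def outgoing(self, document: str) -> list[TypedLink]:
        return [l for l in self._links if l.source.document == document]

    def incoming(self, document: str) -> list[TypedLink]:
        return [l for l in self._links if l.target.document == document]

# A gloss in one manuscript quoting a span of another:
links = LinkBase()
links.add(TypedLink(Anchor('MS-A', 120, 180), Anchor('MS-B', 40, 95),
                    relation='quotes', note='variant reading in MS-B'))
print(links.incoming('MS-B'))  # MS-B 'knows' it is quoted, unlike a web page
```

None of this is exotic computer science; typed and bidirectional links were familiar in pre-web hypertext systems, before the web flattened the link into a one-way pointer.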

If the digital humanities is indeed to start realizing its own intellectual agenda, it needs to rethink the nature of its collaboration. It must avoid like the plague that service activity which purports to be collaboration – the sort of Antichrist of the digital humanities. It should instead develop collaboration within the digital humanities, genuine collaboration which is all too rarely seen. To achieve this requires some fundamental rethinking. Digital humanities centres are certainly part of the problem. Frequently dependent on soft funding, they have perforce to pursue research projects in which the role of the digital humanities is often subservient, and fundamentally a service function. It would be better to have smaller digital humanities departments which have more stable income streams from teaching, and aren’t forced by financial necessity to seek out research projects which reduce the digital humanities element to a service function. The nature of our projects should change as well. We urgently need to start developing more experimental and risky projects, which challenge existing methods and standards and reach out into new areas.

In short, we should code and we should build, but for ourselves and because (like my experiments in trying to create a Handbook of Dates on the Sinclair ZX81) they feed our own intellectual interests and enthusiasms, and not those of others.        
