I went to a colloquium on e-Research on Texts and Images at the British Academy yesterday; very, very swanky. Lunch was served on triangular plates, triangular! Big chandeliers, paintings, grand staircase. Well worth investigating for post-doc fellowships one day.
There were also some good papers. Just one or two things really stuck out for me. There seems to be quite a lot of interest in e-research at the moment around formalising, encoding, and analysing scholarly process. The motivation seems to be that, in order to design software tools that aid scholarship, it's necessary to identify which processes scholars actually engage in and how those processes might be re-figured in software. This is the same direction my research has been taking, and it relates closely to the study of tacit knowledge in which Purcell Plus is engaged.
Ségolène Tarte presented a very useful diagram in her talk explaining why this line of investigation is important. It showed a continuum of activity running from "signal" to "meaning". Along one side of this continuum were the scholarly activities and conceptions that occur as raw primary sources are interpreted; along the other were the computational processes that may aid those human activities. Her particular version of the continuum described the interpretation of images of Roman writing tablets, so the activities included identifying marks, characters, and words, and detecting boundaries and shapes in the images. She also described some common aspects of this process, including oscillation of activity and understanding; dealing with noise; phase congruency; and identifying features (a term which has become burdened with assumed meaning, but which should sometimes also be taken in its most general sense). But I'm sure the idea extends to other humanities disciplines and other kinds of "signal" or primary source.
Similarly, Melissa Terras talked about her work on knowledge elicitation from expert papyrologists. This included various techniques drawn from social science and clinical psychology, such as talk-aloud protocols and concept sorting. She was able to show nice graphs of how an expert's understanding of a particular source switches continually between different levels while working with it. It's this cyclical, dynamic process of coming to understand an artifact that we're attempting to capture and encode, with a view to potentially providing decision-support tools whose design is informed by the encoded procedure.
A few other odd notes I made. David De Roure talked about the importance of social science methods in e-Humanities. Amongst other things, he made the interesting point that it's probably a better investment to teach scholars and researchers to understand data (representation, manipulation, management) than to buy lots of expensive, powerful hardware. Annamaria Carusi said lots of interesting things which I'm annoyed with myself for not having written down properly. (There was something about a warning against the non-neutrality of abstractions; about interpretation as arriving at a hypothesis, and how this potentially aligns humanistic work with scientific method; and about how the use of technologies can make some things very easy, but at the expense of making other things very hard.)