The conference attracts some of the leading practitioners in the field, so it’s a real privilege to be able to hear from and speak to the people leading research and development – creating tools, developing workflows and undertaking research into all aspects of digital management and preservation.
It will take a while to digest everything – there was so much to learn! – but I thought I would gather together some “highlights” of the conference while it is still fresh in my mind.
The conference opened with a keynote from Bob Kahn, who reflected on the need for interoperability and unique identifiers for digital objects. The world we live in is a networked one, and as we conceive of information and objects as linked to one another over networks, we must find consistent and unambiguous ways of describing them. When objects can exist anywhere, and in several places at once, unambiguous identification becomes essential.
To complement this I attended a workshop on persistent identifiers which gave an extremely helpful introduction to the world of URNs, URLs, PURLs, Handles, DOIs and the rest. Sometimes it can seem a little like acronym spaghetti, but the presenters Jonathan Clark, Maurizio Lunghi, Remco van Veenendaal, Marcel Ras and Juha Hakala did their best to untangle it for us. Remco van Veenendaal introduced a great online tool from the National Archives of the Netherlands which aims to guide practitioners towards an informed choice about which identifier scheme to use. You can have a go at it here, and the Netherlands Coalition for Digital Preservation are keen for feedback.
What is particularly useful about it is that it explains in some detail, at each stage, which PID system might be particularly good in specific circumstances, allowing for a nuanced approach to collections management.
Current persistent identifier systems do not cope well with complex digital objects, and likely future developments will be around tackling these shortcomings. Sadly the current widely used systems have already developed along separate lines to the extent that they cannot be fully aligned – not quite the interoperable future we are all hoping for.
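(A small aside of my own, not something demonstrated at the workshop: if you ever wonder what “resolving” one of these identifiers actually involves, the sketch below asks the public Handle System proxy that sits behind DOIs for the records registered against an identifier. The endpoint and the example DOI are illustrative assumptions on my part rather than anything recommended by the presenters.)

```python
# A rough sketch (my own, purely illustrative) of what resolving a persistent
# identifier can look like in practice: DOIs are built on the Handle System,
# and the proxy at doi.org exposes a public REST API returning the records
# registered behind a given DOI.
import json
import urllib.request


def resolve_doi(doi):
    """Fetch the Handle records registered for a DOI via the doi.org proxy."""
    url = f"https://doi.org/api/handles/{doi}"
    with urllib.request.urlopen(url) as response:
        record = json.load(response)
    # Each entry has a type (e.g. "URL") and whatever data was registered.
    return [(v["type"], v["data"]["value"]) for v in record.get("values", [])]


if __name__ == "__main__":
    # 10.1000/1 is an example DOI maintained by the DOI Foundation.
    for value_type, value in resolve_doi("10.1000/1"):
        print(value_type, value)
```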
The second keynote came from Sabine Himmelsbach of the House of Electronic Arts in Basel and was a lively and engaging account of a range of digital artworks and of how digital preservation and curation have to work closely with artists to (re)create artworks. It threw up many philosophical questions about authenticity and integrity, not to mention the technical challenges of emulation and the preservation of legacy formats. This was a theme returned to again and again in various sessions throughout the conference, as was the constant refrain that the main challenges are not necessarily technological.
The conference had so many highlights it’s very hard to choose from amongst them. There were a number of papers looking specifically at the issues around the long-term preservation of research data, which is of particular interest to the work we are undertaking at Lancaster University. There was a fascinating paper given by Austrian researchers from SBA Research and TU Wien (the Vienna University of Technology) looking specifically at the management of the so-called “long tail” of research data – that is, the wide variety of file formats spread over a relatively small number of files which characterises the management of research data in particular, but which is also of relevance for the management of legacy digital collections and digital art collections. This discussion was returned to by Jen Mitcham (University of York) and Steve Mackey (Arkivum) talking about preserving research data, and also in my final workshop on file format identification. Jay Gattuso – nobly joining in at 4 am local time from New Zealand – talked about similar issues at the National Library of New Zealand involving legacy digital formats of which there were only one or two examples.
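(For anyone unfamiliar with what file format identification involves, here is a deliberately crude sketch of my own; real tools such as DROID, Siegfried and fido match files against the full PRONOM signature registry rather than a hard-coded handful. The basic idea is matching a file’s leading bytes against known signatures, and the unmatched case is exactly the “long tail” problem the papers above were wrestling with.)

```python
# A toy version (my own simplification) of signature-based format
# identification: compare a file's leading bytes against a few well-known
# "magic number" signatures. Real tools use the PRONOM registry instead.
import sys

SIGNATURES = {
    b"%PDF": "PDF document",
    b"\x89PNG\r\n\x1a\n": "PNG image",
    b"\xff\xd8\xff": "JPEG image",
    b"PK\x03\x04": "ZIP container (also holds DOCX, ODF and friends)",
}


def identify(path):
    """Return a best-guess format name based on the file's first bytes."""
    with open(path, "rb") as f:
        header = f.read(16)
    for magic, name in SIGNATURES.items():
        if header.startswith(magic):
            return name
    return "unidentified (welcome to the long tail)"


if __name__ == "__main__":
    for filename in sys.argv[1:]:
        print(filename, "->", identify(filename))
```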
One of the posters also captured this point perfectly – “Should We Keep Everything Forever?: Determining Long-Term Value of Research Data” from the team at the University of Illinois at Urbana-Champaign, which looked at creating a methodology for assessing and appraising research data.
Plenty of food for thought there about how much effort we should put into preservation, how we prioritise and how we appraise our collections.
The final keynote was from Dr David Bosshart of the Gottlieb Duttweiler Institute – a provocative take on the move from an industrial to a digital age. He had a very particular view of the future, which caused a bit of a mini Twitter storm from those who felt that his view was very narrow; after all, more than half the world is not online. Whilst his paper was no doubt deliberately designed to create debate, it highlighted questions about where we direct our future developments and what our ultimate goals are. These questions are common to all archival and preservation strategies: whose stories are we preserving, and how are we capturing complex narratives? They were revisited later in a workshop on personal digital archiving. Preservation can only happen where information is captured in the first place, and that can mean educating and empowering people to capture and present their own narratives.
There is still a lot for me to think about from such a varied and interesting conference. There was very little time for leisure, but there were wonderful evening events which the conference organisers arranged – a drinks reception at the National Library of Switzerland and a conference dinner at the impressive fifteenth-century Rathaus. There are lots of conference photos online which give a flavour of the event.
And speaking of flavours I couldn’t visit Switzerland and not try a fondue…. Delicious!
Eating fondue
Rachel MacGregor
(all photos author’s own).
Great post Rachel and glad you had a chance to enjoy the Swiss delicacies too. I would be keen to hear your view on whether the digital preservation landscape has become fairly stable now, or whether you think it is still evolving significantly?
Thanks 🙂
Glad you enjoyed it! I think (and I know this is a politician’s reply) that in some areas things are quite well established and relatively stable, such as standards like OAIS, which gives us a framework to work within, and the METS schema. However in other areas it feels like the community is still catching up with the proliferation of file formats and with the sheer quantity of digital data being produced. I think one of the ways forward on this is promoting digital literacy (and there was some interesting discussion about this at the conference) to try and encourage data creators to become more involved in the process.
Hi Rachel, Thanks for the write-up of your iPres. Interesting reading.
Minor correction, I’m at the national library of New Zealand, not the national archive.
Thanks,
J
Many apologies for this – probably some subconscious professional bias of mine manifesting itself… I have updated the post and the link.