
Podcast available as a video podcast from the NEWS! section at http://www.liv.ac.uk/lucas/

Duranti points out that digital preservation places some new obligations upon archivists in addition to the ones recognised under paper preservation theory, mainly to do with authenticity. The archivist has to become a "designated trusted custodian," with input into record decisions at the very beginning of the record lifecycle. Relevant traditional archivist responsibilities continue to apply as well.


Notes from Borghoff et al. Emulation has some notable advantages over migration, not least that it guarantees the greatest possible authenticity. The document’s original bitstream will always remain unchanged. All (!) we have to do is make sure that a working copy of the original app is available. As it’s impossible to keep the hardware running, we have to emulate the original system on new systems.

In theory there are no limitations on the format of the record – even dynamic behaviour should be preserved. But there are three massive worries with emulation: (a) can it be achieved at reasonable cost? (b) is it possible to resolve all the copyright and legal issues involved in running software programs over decades? and (c) will the human-computer interface of the long-term future be able to cope with the mouse-and-keyboard interface of today’s applications? The only realistic way to answer (c) would be to create a “vernacular copy” (p.78), but this strikes me as migration under a different name – just my own thought.


There is the issue of authenticity. The individual printing out the record often has a certain level of control over how that document is printed: fields or text can be removed from the printed version even if they remain in the digital original. Printing from spreadsheets usually results in the paper copy having only values and calculated data, not the formulas or comments. This means that a paper document cannot necessarily be trusted as a full and complete equivalent of a digital record. Yet many people will allow the digital original to be deleted, or lost, after the paper copy has been created. This may not be an issue on your home computer, but it may well be an issue in an organisation where different members of staff are printing different things.
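
To make the spreadsheet point concrete, here is a minimal sketch in Python (using openpyxl; the file name and cell reference are made up) of how the same cell looks when read from the authored workbook versus from a values-only view, which is roughly what a printout preserves:

```python
# Sketch: the same cell read as authored (formula) and as a bare value.
# "report.xlsx" and cell "B14" are hypothetical; requires openpyxl.
from openpyxl import load_workbook

path = "report.xlsx"

# As authored: a formula cell returns the formula string, e.g. "=SUM(B2:B13)".
authored = load_workbook(path, data_only=False)
print(authored.active["B14"].value)

# Values-only view (what ends up on paper): only the cached result survives;
# the formula, and any cell comments, are gone.
values_only = load_workbook(path, data_only=True)
print(values_only.active["B14"].value)
```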

How do you access paper? You really need a supervised searchroom, with all the costs that entails, and BS5454-compliant storage. Digital preservation is actually cheaper than paper, if properly handled.

Digital records have a feature not present in paper ones, namely behaviour. A paper document is a fixed item, but digital documents are sometimes interactive, and for some of these the behaviour is an essential part of the meaning. Spreadsheets are a good example.

Also, for some organisations there is a legal aspect. If the original document is digital, then it has to be preserved digitally.

Authenticity is simply the requirement that a record should be what it says it is. With paper this is obvious, but with electronic records it is less so.

The first feature is integrity, that the record has retained its essential meaning, even if some aspects of its appearance and functionality have changed. There is no quick way to define “essential” as it will vary from record to record. Even colours may mean something (such as the colours on a map).

The second feature is authentication, that (a) the record has really been created by the person or organisation that it claims to have been created by, and (b) all aspects of its management since creation have been documented.

So, how do you capture this information? The answer is metadata.
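
As a rough illustration of what such metadata might capture – a sketch only, with made-up field names and a hypothetical file, not any particular standard (PREMIS and friends do this properly) – a single record could cover both integrity (a fixity value for the bitstream) and authentication (a claimed creator plus a documented chain of management):

```python
# Minimal sketch of a metadata record for one digital record.
# "minutes-2004-03.pdf" and all field names are illustrative only.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of(path):
    """Fixity value: lets us show later that the bitstream is unchanged (integrity)."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

record = {
    "identifier": "minutes-2004-03",
    "creator": "Estates Committee",               # authentication (a): who it claims to be from
    "date_created": "2004-03-17",
    "fixity_sha256": sha256_of("minutes-2004-03.pdf"),
    "custody_events": [                           # authentication (b): documented management
        {"date": datetime.now(timezone.utc).isoformat(),
         "event": "ingest", "agent": "archives"},
    ],
    "significant_properties": ["text", "layout", "colour"],  # what counts as "essential" here
}

print(json.dumps(record, indent=2))
```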

Brief article about the project in TNA’s RecordKeeping for Autumn 2004.

Original project

The original project was only possible because of a Government programme which had put a BBC Micro into every school in the country during the early 1980s, creating a user base of compatible computers. School children in 1986 entered their own data onto their school computers, which was copied onto floppy disks or tapes and sent to the BBC. All the text and images, together with analogue photographs of OS maps, were transferred to analogue videotape. The community data finally totalled 29,000 photographs and 27,000 maps. The whole database was then assembled on master videotapes from which the final videodiscs were produced. The monitor was usually a TV, which imposed a limit on the level of detail visible at once: users needed to switch between maps, pictures and text.

Restoration project

There were a number of parallel rescue projects, but the one which actually worked was a collaboration between TNA, the BBC and others. It rescued the data not from the videodiscs but from the master tapes.

Independently, LongLife Data Ltd had developed a new PC interface to the community data. It works in the same way as the original, but because a modern monitor has a higher resolution than a 1980s TV screen, pictures and text can be shown simultaneously. This is the version now available on the web.

Alan’s thoughts

  • the data was restored from the analogue videotapes, not from the videodiscs or from the submitted floppy disks. After 15 years the tapes were still readable. So in a sense it was a straightforward piece of media refreshing (see the sketch after this list).
  • the new interface is not an exact emulation of the old interface. It is a wholly new app. The current browsing experience has therefore lost authenticity. (Though the data is the same.)
  • can we find out anything about the authenticity of the data itself?
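
For what it’s worth, media refreshing in this narrow sense just means copying the bitstream off the ageing carrier onto current storage and proving that nothing changed in transit. A minimal sketch, with entirely hypothetical paths:

```python
# Sketch of media refreshing: copy the bitstream to new media and verify it is identical.
# Both paths are hypothetical.
import hashlib
import shutil

def digest(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

src = "/media/old_tape_image/community.dat"    # as read from the ageing carrier
dst = "/archive/refresh_2007/community.dat"    # same bits on current storage

shutil.copy2(src, dst)
assert digest(src) == digest(dst), "refresh failed: bitstreams differ"
```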

Chris Rusbridge in Ariadne, February 2006

http://www.ariadne.ac.uk/issue46/rusbridge/intro.html, accessed 19 Dec 07.

Rusbridge thinks that the digital preservation case has been over-argued. This has led to a backlash, and it has also been counterproductive in that it makes digital preservation look far more expensive than it actually is, so no one then pays for it.

File format change: Rusbridge challenges people to actually think of an old commercial file format which is genuinely unreadable today, rather than simply “obsolete”, which tends to be a euphemism for “difficult to retrieve.” Rusbridge defines unreadable as ‘total loss of information content’, rather than just a partial loss. As far as I know, no one has met his challenge. (File formats created for specific problems, or for devices like cameras, do indeed become unreadable quickly.) There is a perception that files become unreadable, but it is just that: a perception. File formats have actually stabilised over the years, as the information revolution sorts itself out.

Migration: rather than every 3-5 years, this might only need to be done every 10-15 years. So it’s cheaper than we initially thought.

Fidelity: because there is no way of knowing what the future designated communities will actually be interested in, there is pressure to keep all aspects of a record, just in case. This is very expensive, so it leads to less funding, and fewer things preserved. So Rusbridge is in favour of desiccated formats, which limit documents to reduced sets of significant properties but are much easier to preserve. But keep the original bitstream as well. So anyone who is just after the data can see the desiccated format and be happy, while the scholars after more exact properties can put the effort in to recapture the full functionality (and they are the ones who pay for it). It means you still have to keep good documentation and metadata. AA: seems a bit like our CALS policy, though I need to add a bit about keeping the original bitstream.
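
Roughly how I picture the desiccated-plus-original approach working in practice – a sketch only, assuming the record happens to be a spreadsheet readable with openpyxl, and with made-up file names:

```python
# Sketch: keep the original bitstream untouched, and derive a cheap "desiccated"
# values-only copy alongside it. File names are illustrative; requires openpyxl.
import csv
import shutil
from pathlib import Path
from openpyxl import load_workbook

original = "budget-1999.xlsx"
Path("store").mkdir(exist_ok=True)

# 1. The original bitstream is preserved as-is, for anyone who later wants to
#    put in the effort to recover full fidelity (formulas, formatting, behaviour).
shutil.copy2(original, "store/budget-1999.xlsx")

# 2. The desiccated copy: just the evaluated cell values, in a plain format that
#    is cheap to keep readable. Formulas and formatting are deliberately dropped.
wb = load_workbook(original, data_only=True)
ws = wb.active
with open("store/budget-1999.csv", "w", newline="", encoding="utf-8") as out:
    writer = csv.writer(out)
    for row in ws.iter_rows(values_only=True):
        writer.writerow(row)
```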

Costs: digital preservation is cheaper than paper [this is like my Domesday book example]. All preservation is expensive, but digital only seems expensive because it is new, and is not yet costed into anything. Paper archives and libraries are costed, and we have grown used to the costs. The biggest single problem in digital preservation is money, and that’s partly because it is short-term project funded. Also, we need to spend the money wisely. If we think in terms of 1,000 years, then we end up spending loads on a handful of documents, so we lose more overall. Perhaps we should just think about the next generation instead.

Should we simply emulate the app? Rothenberg asks whether we need to run the specific software which created the record, or whether some similar program that can at least partially interpret it would do. The latter might seem sufficient, but Rothenberg thinks the similar program will fall foul of obsolescence just as much as the primary one, so stick with the primary.

It occurs to me too that if your priority is authenticity then the replacement program won’t be good enough.