Responses to metadata survey

This is a summary of responses to a request for characteristics of a metadata entry tool. The full responses are given below.

June 3, 2015

Summary (this is by way of a brainstorm and is not intended as a definitive list of ‘user stories’)

EASE OF USE / USER-FRIENDLINESS would be at the top of my agenda.

Import/export facility to other tools / spreadsheets

Problem of using file metadata if devices are not set properly – allow the software to correct dates harvested from devices

Extract as much metadata from files as possible

Using phones for metadata entry.

Speaker metadata entry (especially important for sociolinguistic work)

Ability to group files by sessions

A (dynamic) database as backend, with static files in various formats generated at the frontend as needed.

Browser-based interface is easy

HTML5 / javascript work well and easily

Be able to duplicate records and then edit them to capture similarities

Making modular but interconnected tools rather than one monolithic application

Work offline and online (reconciling when in range)

Extensible/ customisable metadata set.

Full responses

Of the other tools, I only know Arbil (and IMDI, the predecessor to CMDI, I guess), which are both forbidding in that respect.

Also import facilities (and possibly export) for other tools, and for spreadsheets for that matter, as that is what many people in fact end up using.

Thank you in advance if your group is developing a tool that does all that is stated in the post above.

Cheers,

Eva Schultze-Berndt

Hugh J Paterson III says:

27 March 2015 at 1:06 pm

One thing I have noticed is that the metadata from files is not always accurate. That is, some people do not set the date on their DAR correctly, or they do not set the date on the camera. This also happens with Word documents and PDFs, which pull metadata from word-processor workflows. So, while I am a fan of reduced workloads in the digital object submission process, I also recognize that automatically extracted metadata (well, a lot of it anyway) needs to be visually verified before it is passed on to the archive as accurate. This means that there needs to be some human-verification nag plus an automated verification process, otherwise we just get used to things and click through pop-up screens.
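A minimal sketch of the "extract, then have a human verify" step described above, using only the Python standard library; the field names and the confirmation prompt are illustrative assumptions, not part of any existing tool:

import os
import datetime

def harvest_file_metadata(path):
    """Pull whatever the file system already knows about a file."""
    info = os.stat(path)
    return {
        "filename": os.path.basename(path),
        "size_bytes": info.st_size,
        # Device clocks are often set wrong, so treat this as a guess only.
        "modified": datetime.datetime.fromtimestamp(info.st_mtime).isoformat(),
    }

def verify_interactively(record):
    """Ask a human to confirm or correct each harvested value."""
    confirmed = {}
    for field, value in record.items():
        answer = input(f"{field} = {value!r} (Enter to accept, or type a correction): ")
        confirmed[field] = answer.strip() or value
    confirmed["verified_by_human"] = True
    return confirmed

if __name__ == "__main__":
    raw = harvest_file_metadata(__file__)  # any file path would do
    print(verify_interactively(raw))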

Jeff Good says:

28 March 2015 at 9:53 am

I have been thinking about metadata collection tools quite a lot recently in the context of a team-based project that I am leading. I am not familiar with all the current tools. I've spent some time with Arbil, and also played with CMDI Maker, and I hope to get time to work with CMDI Maker more than I have to this point. But for the project I am working on now (looking at multilingual practices in Cameroon), we are planning to build a new metadata collection tool (which will form part of a larger metadata management system) for a variety of reasons:

1. I was able to get about 30 free, last generation smartphones from this lab at my own institution: https://phone-lab.org. This meant that I suddenly had access to a large number of, in effect, handheld computers that could be distributed across the team.

2. The project team is intended to be composed of, perhaps, as many as fifteen Cameroonian students, as well as some others, over the course of three years. Metadata management will be a major concern, and we want to develop a tool that captures as much metadata as possible at the point of collection rather than after the recording is made (which, I think, is a more typical workflow). By building a metadata capture app into a smartphone, we hope to facilitate such “real-time” metadata collection both by having a form that needs to be filled out before recording begins and by using all the metadata the smartphone already may “know” (e.g., owner, date, location).

3. For a project on multilingualism, having good, consistent, updated metadata on speakers is crucial. The main requirements described in the post above are centered on the files that are created, but, for my project, we need a tool that also makes speaker metadata entry as easy as possible. Our hope is to have this take place at the time of session metadata creation, using the camera capability of the smartphone to help us add a picture to the record of a speaker. We can also place into this workflow prompts to help the collectors discuss issues of informed consent with the consultants, as well as issues of data restrictions, etc. (Obviously, a technological solution to ethical concerns has limitations, and this is only one part of a broader ethics training plan.)

Obviously, a general metadata tool will have different needs, but I thought I’d mention some of the reasons why I’m working on a new tool. Point 1 won’t apply to other projects, but points 2 and 3 raise some more general issues.
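By way of illustration of point 2 above, here is a hedged sketch of a session form pre-filled with values a phone could supply on its own; the field names and the device_info() stand-in are hypothetical, since the app described is still being planned:

import datetime

def device_info():
    """Stand-in for values a phone app could read from the device itself
    (owner, clock, GPS). Hard-coded here because this is only a sketch."""
    return {
        "collector": "phone-owner-id-placeholder",
        "timestamp": datetime.datetime.now().isoformat(timespec="seconds"),
        "location": {"lat": None, "lon": None},  # would come from the GPS sensor
    }

def new_session_form():
    """Start a session record with the device-supplied fields already filled in,
    leaving only the human-only fields (speakers, consent, genre) blank."""
    form = device_info()
    form.update({
        "speakers": [],            # to be entered before recording starts
        "consent_discussed": False,
        "genre": "",
    })
    return form

if __name__ == "__main__":
    print(new_session_form())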

The huge advantage of CMDI Maker over Arbil, in my experience, is two features: a) the ability to create sessions according to file name by bundling different file types with the same name into one session, and b) the copy function across sessions. Also, CMDI Maker is just really great to handle (better than Arbil in my view), accessible and very intuitive, and thus friendly to a large range of users, which is obviously important; with the right workflow setup, large numbers of sessions can be created in a very short time (I once did 150 sessions within two days or so, which would have taken much longer with Arbil, I think...).

CMDI Maker seems to reliably create valid IMDI and thus to link files correctly in LAMUS, for instance. With Arbil, I repeatedly had problems uploading the IMDI files it created into the DoBeS archive via LAMUS, and files that had already been uploaded but were not yet linked would not be linked automatically, even though they were specified in the IMDI file.

Based on this, my tentative idea of a very good tool would essentially be a slightly expanded/modified CMDI Maker. Some features to change or add:

a) add options to store other types of data (i.e. not only Actors): project-related data, place of recording, text variety and related commentary

b) as for data on Actors, some more (optional!) detail on Actors' relationships would be desirable (otherwise keeping the flat input structure), possibly through linking to KinOath (?)

c) linking of metadata to annotation, and then later both metadata and certain types of annotation would need to be exportable or analysable as such, for instance into a spreadsheet. A typical area of relevance for this would be variationist studies of language use based on corpus data, but of course it is also relevant for grammaticography and lexicography etc.

S Schnell

I would add that an ideal metadata helper tool in my view will have a (dynamic) database as backend with static files in various formats generated at frontend as needed. This means, in particular, that a meta-description may internally consist in large part of links to a number of database objects (languages, participants, locations, etc.) rather than of literal copies of (xml) fragments, with complete descriptions generated only when needed. This also means that when a change is to be made e.g. in participant (actor) information, the user will not have to do a search/replace across the metadata files, nor to drag&drop an (updated) participant item to all the files concerned, but to make the change once in the database and refresh/regenerate the metadata files.

A Arkhipov
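A minimal sketch of the "links, not copies" idea described above: participants are stored once and each session only holds their identifiers, so a correction made once in the database is reflected in every regenerated description. The dictionaries and the simple XML rendering are illustrative assumptions, not real IMDI/CMDI output:

import xml.etree.ElementTree as ET

# The "database": each participant is stored exactly once, keyed by an ID.
participants = {
    "P001": {"name": "Speaker One", "languages": ["lang-A"]},
    "P002": {"name": "Speaker Two", "languages": ["lang-A", "lang-B"]},
}

# Sessions hold references (IDs), not literal copies of participant data.
sessions = [
    {"id": "S001", "title": "Narrative recording", "actors": ["P001", "P002"]},
]

def render_session(session):
    """Regenerate a complete description on demand by resolving the links."""
    root = ET.Element("Session", id=session["id"])
    ET.SubElement(root, "Title").text = session["title"]
    actors = ET.SubElement(root, "Actors")
    for pid in session["actors"]:
        person = participants[pid]
        actor = ET.SubElement(actors, "Actor", id=pid)
        ET.SubElement(actor, "Name").text = person["name"]
        ET.SubElement(actor, "Languages").text = ", ".join(person["languages"])
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    # Correct the name once in the database ...
    participants["P001"]["name"] = "Speaker One (corrected)"
    # ... and every description regenerated afterwards picks it up.
    print(render_session(sessions[0]))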

Here's my 2 cents. A format-agnostic metadata tool that accompanies the workflow from recording to transcription to annotation all the way through to archiving would be wonderful. However, even if it is output-format agnostic, the data categories and the data structures have to fit all potential output formats. Since the data structures of the different formats are far from isomorphic, this is already a conceptual challenge.

Personally, I would love a tool that would also read and write embedded metadata – BWF metadata in the case of WAV files, for example – as well as the normal stand-off metadata files. Even though I have never worked with ExSite9 myself, I have to say that the idea of extracting embedded EXIF metadata from files seems a very good approach. Digital media files come with a decent amount of meta-information that is reasonably easy to extract, although the format- and vendor-dependent variation is somewhat annoying.

I can't say anything meaningful about SayMore. In my experience, Arbil is too powerful for most users and the interface is rather unintuitive. The fact that it readily accepts any flavour of CMDI makes it very useful in the European CLARIN context, which requires CMDI metadata, but outside of this context it is probably not the first choice of users.

Our experience with CMDI Maker has been really positive. Users are very comfortable with browser-based interfaces, and installation and even updates and bug fixes aren't cumbersome for them. The functionality of CMDI Maker is clearly restricted, which is the result of the goal of adding to the IMDI/CMDI ecosystem a simple tool with a different focus than Arbil. Implementing the ELDP metadata format in CMDI Maker was easy, and extending and internationalizing the tool beyond the IMDI/CMDI formats is also straightforward; we have tested it with a metadata format used in art collections.

In my opinion, the modern HTML5 & JavaScript web platform would be a good fit for more complex metadata entry tools, as its functionality is really sufficient for applications of this type. Offline-enabled web apps are in general a very good fit for the requirements of our user community, and on top of that, it is relatively easy to find developers for the HTML5 & JavaScript stack.

F Rau
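To illustrate reading embedded metadata, the following hedged sketch walks the RIFF chunks of a WAV file and decodes the first few text fields of a "bext" (Broadcast Wave) chunk. The field offsets follow the published BWF layout, but this is only a reading aid written under that assumption, not a full implementation, and writing metadata back would need considerably more care:

import struct
import sys

def read_bext(path):
    """Return the leading text fields of the 'bext' chunk of a WAV file,
    or None if no such chunk is present. Only the first few fields of the
    Broadcast Wave layout are decoded here."""
    with open(path, "rb") as f:
        if f.read(4) != b"RIFF":
            raise ValueError("not a RIFF file")
        f.read(4)                              # overall size, not needed here
        if f.read(4) != b"WAVE":
            raise ValueError("not a WAVE file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                return None                    # reached the end without a bext chunk
            chunk_id, size = struct.unpack("<4sI", header)
            data = f.read(size + (size & 1))   # chunk data is padded to even length
            if chunk_id == b"bext":
                def text(raw):
                    return raw.split(b"\x00", 1)[0].decode("ascii", "replace").strip()
                return {
                    "description": text(data[0:256]),
                    "originator": text(data[256:288]),
                    "originator_reference": text(data[288:320]),
                    "origination_date": text(data[320:330]),
                    "origination_time": text(data[330:338]),
                }

if __name__ == "__main__":
    print(read_bext(sys.argv[1]))              # pass the path to a WAV file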

I mentioned that I had tried using Omeka as a web-deployed metadata creation tool. It worked OK, except that I could not find a way to replicate records and then edit them, and this is in my opinion an essential piece of functionality in a metadata tool. Maybe my work practices are not normal, but I consistently want to create groups of metadata records which share large amounts of information, and re-entering everything from scratch is just inconceivable! Whether the relevant parts of Omeka's code could be used as the basis for something better is not a question I can answer, though.

S. Musgrave

I think it is also worth looking at existing tools which already do some of the needed tasks, for instance vocabulary management systems such as OpenSKOS, and other related or unrelated tools. After my experience with Arbil, I am very keen on making modular but interconnected tools rather than one monolithic application. Managing the interconnections between the tools is quite a complex task to do well, but the benefit is that individual components can be replaced with new developments or swapped according to the users' needs.

Linking parts of the metadata to a dynamic database can be advantageous, but also problematic if not done well. For instance, the metadata may describe a participant who appears in an earlier recording and subsequently in a later one. That participant might have known two languages at the time of the first recording but have learnt additional languages by the time of the second. The metadata for each recording needs to reflect the participant's state at that point in time, and changing a single global record would corrupt the first. This issue touches on some of the general needs of entity management. Providing a unique identifier for each participant, for instance, is an important start, but it also requires a well-designed entity management system that allows for changes in properties over time, as well as splitting and merging entities, while also keeping a history of these changes.

I agree that HTML5/JS is advantageous, in particular as a good distribution format for an application on a variety of platforms. I am not convinced, however, that it is very scalable, and I would be concerned about developing more complex applications without the necessary language features. There are, however, many solutions to this problem, such as GWT, which cross-compiles to HTML5/JS while still providing language features that help with scalability.

P. Withers
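A minimal sketch of the entity-management point above: each participant keeps a dated history of property snapshots, so an early recording and a later one can each cite the participant as they were at the time, rather than sharing a single overwritten global record. The class and field names are illustrative assumptions:

import datetime

class Participant:
    """A participant identified once, with time-stamped property snapshots."""

    def __init__(self, pid):
        self.pid = pid
        self.history = []          # (date, properties) pairs, kept in date order

    def record_state(self, date, **properties):
        self.history.append((date, properties))
        self.history.sort(key=lambda entry: entry[0])

    def as_of(self, date):
        """Return the most recent snapshot at or before the given date."""
        state = None
        for recorded, properties in self.history:
            if recorded <= date:
                state = properties
        if state is None:
            raise LookupError(f"no state recorded for {self.pid} before {date}")
        return state

if __name__ == "__main__":
    p = Participant("P001")
    p.record_state(datetime.date(2013, 5, 1), languages=["lang-A", "lang-B"])
    p.record_state(datetime.date(2015, 2, 1), languages=["lang-A", "lang-B", "lang-C"])
    # An early recording and a later one see different, equally correct, states.
    print(p.as_of(datetime.date(2014, 1, 1)))
    print(p.as_of(datetime.date(2015, 6, 1)))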

I think there's a big need for improvements here, and the main issue is that we need a tool whose interface is as straightforward as its underlying chassis is powerful: something with the power of Arbil but the usability of a Google webapp, to suit a variety of potential users. And we need something that will continue to be developed. I had actually thought that ExSite9 was a good start, but I guess it tanked? At least, I remember it being hard to get support. Arbil is powerful, but the documentation is a bloody nightmare and it looks as though it's been designed to punish users through incomprehensible terminology, strange icons, unintuitive procedures, etc. Meanwhile CMDI Maker is making great progress on usability, but is *completely bizarre* in the way it stores data, in the Firefox cache. This is a bit like storing your jewelry in the vegetable drawer of your fridge: sooner or later, someone's sure to clean it out for some reason.

To me, it would make the most sense to design a tool that was XML-based, had a very simple default interface (type your project name here, import a media file here, add X type of description of your media file here), and was easily customizable, both so that users could easily add and define fields and so that archives could easily design and distribute templates.

Another issue is that we would want easy ways to import offline data. So, imagining for example that an archive is held primarily away from the field site, on a desktop in Melbourne or wherever, but people are working offline on their own terminals in the field and entering data through whatever platform (spreadsheet, smartphone app, laptop copy of the program, whatever), we want a way to easily export that data from wherever it's entered and import/integrate it into the corpus, perhaps several times per year.

I'm not sure if that's at all helpful, but the bottom line, I think, is that we need a tool that does everything Arbil does, but in a more user-friendly way, and that better handles the fact that we have multiple people entering data offline without access to the whole corpus, whose work then needs to be merged easily into the main corpus.

M Post
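A hedged sketch of the offline import/merge step described above, assuming every record carries a stable identifier and a last-edited timestamp: new records are added, newer versions replace older ones, and genuinely conflicting edits are reported for a human to resolve rather than silently overwritten. The field names are assumptions made for illustration:

def merge_offline_records(corpus, incoming):
    """Merge records entered offline into the main corpus.

    Both arguments map a record ID to a dict that includes a 'modified'
    timestamp (ISO date strings compare correctly as text). Returns the
    IDs that need human attention."""
    conflicts = []
    for record_id, record in incoming.items():
        current = corpus.get(record_id)
        if current is None:
            corpus[record_id] = record            # brand-new record
        elif record["modified"] > current["modified"]:
            corpus[record_id] = record            # offline copy is newer
        elif record["modified"] < current["modified"]:
            pass                                  # corpus copy is already newer
        elif record != current:
            conflicts.append(record_id)           # same timestamp, different content
    return conflicts

if __name__ == "__main__":
    corpus = {"S001": {"title": "Old title", "modified": "2015-01-10"}}
    offline = {
        "S001": {"title": "Corrected title", "modified": "2015-03-02"},
        "S002": {"title": "New session", "modified": "2015-03-02"},
    }
    print(merge_offline_records(corpus, offline))  # expect no conflicts here
    print(corpus)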

I agree with many of the comments and congratulate you, Nick, for getting this initiative going.

This thread of emails already gets into the discussion which should actually happen in reaction to the blog entry and at the planned event.

I just wanted to draw your attention to yet another tool/workflow within the IMDI/CMDI realm, COALA:

http://clarin.phonetik.uni-muenchen.de/BASWebServices/#/services/Coala

-- see also the Youtube instruction video: https://www.youtube.com/watch?v=EaIHujLkOdc

(The link in the video and in the description contains one "www." too many.)

The combination of Excel tables and IMDI/CMDI files generated from them is an approach that I personally have also been following; see my attached presentations, which describe the workflow at the Museu Goeldi in Belém, Brazil. I am planning to replace the current set of Emacs Lisp scripts with a more flexible set of Python scripts. The main point of our workflow is that the filenames are also created following a defined standard, which is a feature many have requested. See Nikolaus Himmelmann's older remarks, also attached.

S Drude
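As a closing illustration of the spreadsheet-to-metadata workflow just described, here is a small Python sketch (Python being the planned direction) in which each row of a table yields a filename built from a fixed pattern plus a minimal XML session description. The column names, the filename pattern, and the XML elements are illustrative assumptions, not the Museu Goeldi conventions or actual IMDI/CMDI:

import csv
import io
import xml.etree.ElementTree as ET

# Illustrative table content; in practice this would be read from a .csv
# exported from Excel (or from Excel directly, via a library such as openpyxl).
SPREADSHEET = """project,language,speaker,date,genre
DEMO,xyz,SP1,2015-02-14,narrative
DEMO,xyz,SP2,2015-02-15,conversation
"""

def standard_filename(row):
    """Build a filename from a fixed pattern: project_language_speaker_date."""
    return "{project}_{language}_{speaker}_{date}".format(**row)

def session_xml(row):
    """Render one row as a small XML session description."""
    root = ET.Element("Session", name=standard_filename(row))
    for field in ("project", "language", "speaker", "date", "genre"):
        ET.SubElement(root, field.capitalize()).text = row[field]
    return ET.tostring(root, encoding="unicode")

if __name__ == "__main__":
    for row in csv.DictReader(io.StringIO(SPREADSHEET)):
        print(standard_filename(row))
        print(session_xml(row))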