The documentation, including a step-by-step guide, can be found on Red Hen's Techne website.
The Distributed Little Red Hen Lab™ is a global laboratory and consortium for research into multimodal communication. Its main goal is the development of theory; its second goal is the development of new computational, statistical, and technical tools to assist research on multimodal communication. See the Overview of the Red Hen Vision and Program. Red Hen is a cooperative of engaged researchers who collaborate closely and contribute computing power and content to Red Hen, and hence to each other and to future researchers. It lacks the resources and organization to serve scholars other than those who work in the cooperative. Red Hen's vast and growing archive is not designed to be a corpus, but some collaborators use it to help create corpora for specific purposes. Researchers who would like to work on newer ways of deriving corpora from the archive, on providing user-friendly interfaces for the archive, on improving the tagging of data, or on anything else that would benefit the distributed laboratory are warmly encouraged to write to the directors. See also our Barnyard of Possible Specific Projects—our concrete to-do list. Join us and dig in!
Working on any novel problem requires data annotation, a precursor that demands a great deal of a researcher's time and effort. When the data has to be annotated by multiple annotators, the problem is compounded.
With Red Hen Lab’s Rapid Annotator we try to enable researchers worldwide to annotate large chunks of data in a very short period of time with the least effort possible, and to let them get started with minimal training.
Rapid Annotator is currently a proof of concept rather than a finished product. This project aims to deliver a usable product by the end of Google Summer of Code. The final product will be a complete tool for fast and simple classification of datasets, together with an administrative interface where experimenters can conduct their annotation runs. The workflow broadly comprises three steps:
- Uploading datasets to set up the experiment.
- Assigning datasets to annotators for annotation.
- Keeping track of the annotation progress.
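The three steps above can be illustrated with a minimal Python sketch. This is not Rapid Annotator's actual code or data model; the `Experiment` class and its method names are hypothetical, chosen only to show how an experiment, its annotator assignments, and their progress might hang together.

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    """Hypothetical experiment: a dataset plus per-annotator label sheets."""
    name: str
    items: list                                   # data items to annotate
    labels: dict = field(default_factory=dict)    # annotator -> {item index: label}

    def assign(self, annotator: str) -> None:
        # Step 2: give an annotator an empty label sheet for this dataset.
        self.labels.setdefault(annotator, {})

    def annotate(self, annotator: str, index: int, label: str) -> None:
        # Record one annotation decision for one item.
        self.labels[annotator][index] = label

    def progress(self, annotator: str) -> float:
        # Step 3: fraction of items this annotator has labeled so far.
        return len(self.labels[annotator]) / len(self.items)

# Step 1: "upload" a dataset to set up the experiment (file names are made up).
exp = Experiment("gesture-study",
                 items=["clip1.mp4", "clip2.mp4", "clip3.mp4", "clip4.mp4"])
exp.assign("alice")
exp.annotate("alice", 0, "pointing")
exp.annotate("alice", 1, "beat")
print(exp.progress("alice"))  # 0.5 — two of four clips labeled
```

In the real tool these steps run through a web interface, with the experimenter uploading data and monitoring each annotator's progress from the administrative view.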