Call for Labs Participation


CLEF 2016: Conference and Labs of the Evaluation Forum

Information Access Evaluation meets Multilinguality, Multimodality and Interaction

5-8 September 2016, Évora - Portugal

 

Labs registration: http://clef2016-labs-registration.dei.unipd.it/

 

The CLEF Initiative (Conference and Labs of the Evaluation Forum, formerly known as Cross-Language Evaluation Forum) is a self-organized body whose main mission is to promote research, innovation, and development of information access systems with an emphasis on multilingual and multimodal information with various levels of structure.

CLEF 2016 is the seventh CLEF conference, continuing the popular CLEF campaigns that have run since 2000 and contributed to the systematic evaluation of information access systems, primarily through experimentation on shared tasks. CLEF 2016 consists of an independent conference and a set of labs and workshops designed to test different aspects of mono- and cross-language information retrieval systems.

Each lab focuses on a particular sub-problem or variant of the retrieval task, as described below. Researchers and practitioners from all segments of the information access and related communities are invited to participate in any or all of the evaluation labs. CLEF 2016 offers seven labs and one workshop:

 


Lab details

CLEFeHealth

The usage scenario of CLEFeHealth is to ease and support patients, their next of kin, and clinical staff in understanding, accessing, and authoring eHealth information in a multilingual setting. eHealth documents become much easier to understand after expanding shorthand, correcting misspellings, normalising all health conditions to standardised terminology, and linking the words to a patient-centric search on the Internet. This year, CLEF eHealth organises three tasks:

Task 1: Handover Information Extraction

Task 2: Multilingual Information Extraction

Task 3: Patient-Centered Information Retrieval

Lab coordination:

  • Lorraine Goeuriot (Université Joseph Fourier, FR - lorraine.goeuriot [at] imag.fr),

  • Liadh Kelly (Trinity College Dublin, IRL - liadh.kelly [at] scss.tcd.ie)

Lab website: https://sites.google.com/site/clefehealth2016/


ImageCLEF

For the 2016 edition, ImageCLEF will organize three main tasks with a global objective of benchmarking automatic annotation, indexing and retrieval of images. The tasks tackle different aspects of the annotation and retrieval problem and are aimed at supporting and promoting cutting-edge research addressing the key challenges in the field. A wide range of source images and objectives are considered, such as general multi-domain images for object or concept detection, as well as domain-specific tasks such as labelling and separation of compound figures from biomedical literature and scanned pages from historical documents.

  • Image Annotation: a task aimed at the development of systems for automatic multi-concept image annotation, localization, and subsequent sentence-description generation.

  • ImageCLEFmed: a medical task addressing the problems of labelling and separation of compound figures from biomedical literature and detection of bones and body parts imaged in x-rays.

  • Handwritten Scanned Document Retrieval: a task aimed at retrieval in scanned pages from historical handwritten documents.

Lab coordination:

  • Mauricio Villegas (Universitat Politècnica de València, ES - mauvilsa [at] upv.es)

  • Henning Müller (University of Applied Sciences Western Switzerland in Sierre, CH - henning.mueller [at] hevs.ch)

Lab website: http://www.imageclef.org/2016


LifeCLEF

The LifeCLEF 2016 lab proposes three data-oriented challenges, continuing the two previous editions of the lab but with several novelties intended to push the boundaries of the state of the art in research directions at the frontier of information retrieval, machine learning, and knowledge engineering, including: large-scale classification, weakly supervised and open-set classification, transfer learning and fine-grained classification, crowdsourcing models and algorithms, interactive and mobile search, scene understanding, and focused crawling and record linkage. More concretely, the lab is organized around three tasks:

  • BirdCLEF: an audio-recording-based bird identification task

  • PlantCLEF: an image-based plant identification task

  • SeaCLEF: a visual monitoring task for sea-related organisms

Lab coordination:

  • Alexis Joly (INRIA Sophia-Antipolis - ZENITH team, Montpellier, FR - alexis.joly [at] inria.fr)

  • Henning Müller (University of Applied Sciences Western Switzerland in Sierre, CH - henning.mueller [at] hevs.ch)

Lab website: http://www.imageclef.org/node/197


Living Labs for IR (LL4IR)

The main goal of LL4IR is to provide a benchmarking platform for researchers to evaluate their ranking systems in a live setting with real users in their natural task environments. The lab acts as a proxy between commercial organizations (live environments) and lab participants (experimental systems), facilitates data exchange, and enables comparison between the participating systems.

CLEF 2016 sees the second edition of the lab, which features one task:

  • Task 1 - Product Search and Web Search

Lab coordination:

  • Krisztian Balog (University of Stavanger, NO - krisztian.balog [at] uis.no)

  • Liadh Kelly (Trinity College Dublin, IRL - liadh.kelly [at] scss.tcd.ie)

  • Anne Schuth (University of Amsterdam, NL - anne.schuth [at] uva.nl).

Lab website: http://living-labs.net/clef-ll4ir-2016/


News Recommendation Evaluation Lab (NEWSREEL)

CLEF 2016 is the third iteration of this lab. NEWSREEL provides two tasks designed to address the challenge of real-time news recommendation. Participants can: a) develop news recommendation algorithms and b) have them tested by millions of users over the period of a few weeks in a living lab. The following tasks are offered:

  • Task 1 - Benchmarking News Recommendations in a Living Lab: benchmarks news recommendation algorithms in a living lab environment; participants will be given the opportunity to develop news recommendation algorithms and have them tested by potentially millions of users over the period of one year.

  • Task 2 - Benchmarking News Recommendations in a Simulated Environment: simulates a real-time recommendation task using a novel recommender systems reference framework. Participants in the task have to predict users' clicks on recommended news articles in simulated real time.

Lab coordination:

  • Frank Hopfgartner (University of Glasgow, UK - frank.hopfgartner [at] gmail.com)

  • Torben Brodt (plista GmbH, Berlin, DE - tb [at] plista.com)

Lab website: http://www.clef-newsreel.org/


Uncovering Plagiarism, Authorship and Social Software Misuse (PAN)

This is the 13th edition of the PAN lab on uncovering plagiarism, authorship, and social software misuse. PAN offers one task at CLEF 2016, with the main goal of providing sustainable and reproducible evaluations and a clear view of the capabilities of state-of-the-art algorithms.

  • Task 1 - PAN Lab on Digital Text Forensics

Lab coordination: pan [at] webis.de

  • Martin Potthast (Bauhaus-Universität Weimar, DE),

  • Benno Stein (Bauhaus-Universität Weimar, DE),

  • Paolo Rosso (Universitat Politècnica de València, ES),

  • Efstathios Stamatatos (University of the Aegean, GR).

Lab website: http://pan.webis.de


Social Book Search (SBS)

The Social Book Search (SBS) Lab investigates book search in scenarios where users search with more than just a query, and look for more than objective metadata. Real-world information needs are generally complex, yet almost all research focuses instead on either relatively simple search based on queries or recommendation based on profiles. The goal is to research and develop techniques to support users in complex book search tasks. The Social Book Search Lab consists of three tracks:

  • Interactive Track: a user-oriented interactive task investigating systems that support users in each of the multiple stages of a complex search task. The track offers participants a complete experimental interactive IR setup and an exciting new multistage search interface to investigate how users move through search stages.

  • Suggestion Track: a system-oriented task to suggest books based on rich search requests combining several topical and contextual relevance signals, as well as user profiles and real-world relevance judgements.

  • Mining Track: an NLP/Text Mining track focussing on supporting users in discussion forums by detecting and linking book titles and author names to entity metadata.

Lab coordination:

  • Jaap Kamps, Marijn Koolen, Hugo Huurdeman (University of Amsterdam, NL - kamps [at] uva.nl, marijn.koolen [at] uva.nl, h.c.huurdeman [at] uva.nl)

  • Toine Bogers, Mette Skov (Aalborg University, Copenhagen, DK - toine [at] hum.aau.dk, skov [at] hum.aau.dk)

  • Mark Hall (Edge Hill University, Ormskirk, UK - hallmark [at] edgehill.ac.uk)

Lab website: http://social-book-search.humanities.uva.nl/#/overview



 


Workshop details

Cultural Microblog Contextualization (CMC)

The Cultural Microblog Contextualization CLEF 2016 workshop aims at developing processing methods and resources to mine the social media sphere surrounding cultural events such as festivals. Tweets linked to an event make a dense, rich, but very noisy corpus: content is often imprecise, duplicated, or non-informative. For its first edition, this workshop will give registered participants access to a massive collection of microblogs, URLs, and images, all related to cultural festivals around the world. This access will allow researchers in IR and NLP to experiment with large-scale multilingual microblog search, Wikipedia entity search, and automatic summarization. Extensive textual references will be provided by the organizers. The following tasks are offered:

  • Task 1 - Cultural multilingual microblog contextualization based on Wikipedia

  • Task 2 - Cultural microblog search based on Wikipedia entities

  • Task 3 - Timeline illustration based on microblogs


Coordination:

  • Georges Linarès and Eric SanJuan (Université d'Avignon, FR - firstname.lastname [at] univ-avignon.fr)

  • Lorraine Goeuriot and Philippe Mulhem (Université Grenoble Alpes, FR - firstname.lastname [at] imag.fr)

  • Josiane Mothe (Institut de Recherche en Informatique de Toulouse, FR - mothe [at] irit.fr)

Website: https://mc2.talne.eu/~cmc/spip/


 


Lab registration

Participants must register for tasks via the following website:
http://clef2016-labs-registration.dei.unipd.it/

 

Data

Training and test data are provided by the lab organizers, allowing participating systems to be evaluated and compared in a systematic way.

 

Workshops

The lab workshop sessions will take place within the CLEF 2016 conference at the conference site in Évora. Lab coordinators will present a summary of their lab in overview presentations during the plenary scientific paper sessions of the CLEF 2016 conference, allowing non-participants to gain an overview of the motivation, objectives, outcomes, and future challenges of each lab. The separate lab workshop sessions provide a forum for participants to present their results (including failure analyses and system comparisons), descriptions of the retrieval techniques used, and other issues of interest to researchers in the field. Participating groups will also be invited to present their results in a joint poster session.

 

Publication

All groups participating in each evaluation lab are asked to submit a paper for the CLEF 2016 Working Notes. These will be published in the online CEUR-WS proceedings and on the conference website.

Lab organizers will produce two different and separate types of overviews: one for the online Working Notes, and one for the conference proceedings (published by Springer in the Lecture Notes in Computer Science - LNCS series).

 

Timeline

The timeline for 2016 Labs is as follows:

  • Labs registration opens: October 30, 2015

  • Registration closes: April 22, 2016

  • Evaluation cycle ends: May 4, 2016

  • Submission of Participant Papers [CEUR-WS]: May 25, 2016

  • Submission of Lab Overviews [LNCS]: June 3, 2016

  • Notification of Acceptance Lab Overviews [LNCS]: June 10, 2016

  • Camera Ready Copy of Lab Overviews [LNCS] due: June 17, 2016

  • Notification of Acceptance Participant Papers [CEUR-WS]: June 17, 2016

  • Camera Ready Copy of Participant Papers and Extended Lab Overviews [CEUR-WS] due: July 1, 2016

 


Organization

Conference Chairs:

  • Norbert Fuhr, University of Duisburg-Essen, Germany

  • Paulo Quaresma, University of Évora, Portugal

Program Chairs:

  • Birger Larsen, University of Aalborg, Denmark

  • Teresa Gonçalves, University of Évora, Portugal

Lab chairs:

  • Craig Macdonald, University of Glasgow, UK

  • Krisztian Balog, University of Stavanger, Norway

Lab committee:

  • Martin Braschler, Zurich University of Applied Sciences, Switzerland

  • Nicola Ferro, University of Padua, Italy

  • Donna Harman, National Institute for Standards and Technology (NIST), USA

  • Maarten de Rijke, University of Amsterdam UvA, The Netherlands

Local organization committee:

  • Irene Rodrigues, University of Évora, Portugal

  • José Saias, University of Évora, Portugal

  • Luís Rato, University of Évora, Portugal

Proceedings Chairs:

  • Linda Cappellato, University of Padua, Italy

  • Nicola Ferro, University of Padua, Italy