Call for Challenges

Overview

The ESWC organizers are glad to announce that the Challenges Track will be included again in the program of ESWC 2018!

Five challenges were held last year [1], allowing the ESWC2017 conference to attract a broader audience beyond the Semantic Web community, spanning disciplines such as Recommender Systems and Knowledge Extraction.

For the 2018 edition, this call for challenges is open to select the challenges that will be held at the conference.

The purpose of challenges is to showcase the maturity of state-of-the-art methods and tools on tasks common to the Semantic Web community and adjacent disciplines, in a controlled setting with rigorous evaluation.

Semantic Web Challenges are an official track of the conference, ensuring significant visibility for the challenges as well as their participants. Challenge participants are asked to present their submissions and to provide a paper describing their work. These papers undergo peer review by experts relevant to the challenge task and are published in the challenge proceedings.

In addition to the publication of proceedings, challenges at ESWC2018 will benefit from high visibility and direct access to the ESWC audience and community.

[1] https://2017.eswc-conferences.org/call-challenges 

Challenge Proposals


Challenge organizers are encouraged to submit proposals adhering to the following criteria:

  • At least one task involving semantics in data. The task(s) should be well defined and related to the Semantic Web, but not necessarily confined to it. Proposers are highly encouraged to consider tasks that involve closely related communities, such as NLP, Recommender Systems, Machine Learning or Information Retrieval. If multiple tasks are provided, they should be independent, so that participants may choose which ones to participate in.
  • Task descriptions that are likely to interest a wider audience. We encourage challenge organizers to propose at least one basic task that can be addressed by a larger audience from their community. Engaging with your challenge audience and obtaining feedback from your target group on the task design can help shape the task and ensure a sufficient number of participants.
  • A clear and rigorous definition of the tasks. For each task, you should define a deterministic and objective way to verify whether the goal of the task has been achieved and, if applicable, to what extent. The best way is usually to provide detailed examples of input data and expected output. The examples should cover all situations that can occur while performing the task and leave no room for ambiguity about whether the task has been accomplished in a particular case.
  • A valid dataset (if applicable). If accepted, you should find or create a dataset that will be used for the challenge. In any case, you must specify the provenance of the dataset (if it contains human annotations, how they were obtained). You must make sure you have the right to use and publish the dataset, and clearly state the license for its use within the challenge. The dataset should be split into at least two parts: the training part and the evaluation part. The training part contains the data together with the results that should be obtained when performing the task. For the evaluation part, you should publish only the data and make sure that the correct results have not previously been available to the participants. When proposing the challenge you must provide details on the dataset and on the way it is or will be created; the dataset itself can be made available later.
  • A challenge committee composed of at least three respected researchers with experience in the tasks of the challenge. The committee helps evaluate the papers submitted by the participants and validates the evaluation procedure.
  • Evaluation metrics and procedure. For each task there must be at least two objective criteria (metrics), e.g. precision and recall. The evaluation procedure and the way in which the metrics will be calculated must be clearly specified and made transparent to the participants; making the evaluation scripts available on the challenge website is good practice, as in the sketch below.
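
As an illustration, the following is a minimal sketch of such a transparent evaluation script in Python. The tab-separated file format, the file names and the item-level precision/recall/F1 metric are assumptions made for the example, not a format prescribed by this call.

    # Minimal sketch of a transparent evaluation script (illustrative only).
    # Assumed input format: gold and predicted results as text files with one
    # "item_id<TAB>answer" pair per line.

    def load_pairs(path):
        """Read 'item_id<TAB>answer' lines into a set of (id, answer) pairs."""
        with open(path, encoding="utf-8") as f:
            return {tuple(line.rstrip("\n").split("\t", 1))
                    for line in f if line.strip()}

    def evaluate(gold_path, pred_path):
        gold = load_pairs(gold_path)
        pred = load_pairs(pred_path)
        tp = len(gold & pred)  # true positives: predictions matching the gold standard
        precision = tp / len(pred) if pred else 0.0
        recall = tp / len(gold) if gold else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if precision + recall else 0.0)
        return precision, recall, f1

    if __name__ == "__main__":
        p, r, f1 = evaluate("gold.tsv", "predictions.tsv")
        print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")

Publishing such a script alongside the training data lets participants reproduce their own scores exactly and removes any ambiguity about how the metrics are computed.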

The criteria for selecting the supported challenges include:

  • Potential number of interested participants
  • Rigor and transparency of the evaluation procedure
  • Relevance for the Semantic Web community
  • Endorsements (from researchers working on the task, from industry players interested in results, from future participants)

Important Dates


  • Challenge proposals due: Friday, December 22nd, 2017 – 23:59 Hawaii Time
  • Challenges chosen/merged – notification sent to organizers: Friday, December 29th, 2017
  • Training data ready and challenge Calls for Papers published: Monday, February 12th, 2018
  • Challenge paper submission deadline (5-page document): Friday, March 30th, 2018
  • Challenge paper reviews: Thursday, April 26th, 2018
  • Notifications sent to participants and invitations to submit task results: Monday, April 30th, 2018
  • Test data (and other participation tools) published: Monday, May 7th, 2018
  • Camera-ready papers for the conference (5-page document): Monday, May 14th, 2018
  • Submission of challenge results: at the organizers' discretion
  • Proclamation of winners: during the ESWC2018 closing ceremony
  • Camera-ready papers for the challenge post-proceedings (15-page document): Friday, July 6th, 2018 (tentative deadline)

Submission Details


Challenge proposals should contain at least the following elements:

  • A summary description of the challenge and tasks
  • How the training/testing data will be built and/or procured
  • The evaluation methodology to be used, including clear evaluation criteria, the exact way in which they will be measured, who will perform the evaluation, and how transparency will be assured
  • The anticipated availability of the necessary resources to the participants
  • The resources required to prepare the tasks (computation and annotation time, costs of annotations, etc.)
  • The list of challenge committee members who will evaluate the challenge papers (please indicate which of the listed members already accepted the role)

In case of doubt, feel free to send us your challenge proposal drafts as early as possible; the challenge chairs will provide you with feedback and answer any questions you may have.
Please submit proposals via EasyChair as soon as possible and no later than *22 December 2017*.

Information for the Participants of the Challenges


  • All challenge papers will be published in a dedicated Springer book, as was done for the 2014, 2015, 2016 and 2017 editions.
  • At least one author of each submitted challenge paper needs to register for the challenge event (not necessarily the main conference) for the proposed system to compete in that challenge.
  • If no author of a submitted challenge paper is registered, the paper will not compete in the challenge but will still be published in the Springer book.

Accepted Challenges


1. Open Knowledge Extraction Challenge 2018. The Open Knowledge Extraction Challenge invites researchers and practitioners from academia as well as industry to compete with the aim of pushing the state of the art in knowledge extraction from text for the Semantic Web further. The challenge has the ambition to provide a reference framework for research in this field by redefining a number of tasks typically drawn from information and knowledge extraction while taking Semantic Web requirements into account, and has the goal of testing the performance of knowledge extraction systems. This year, the challenge enters its fourth round and consists of four tasks, which include named entity identification, disambiguation by linking to a knowledge base, as well as relation and knowledge extraction. The challenge makes use of small gold standard datasets consisting of manually curated documents and large silver standard datasets consisting of automatically generated synthetic documents. The performance measure of a participating system is twofold, based on (1) precision, recall and F1-measure and (2) precision, recall and F1-measure with respect to the runtime of the system.
Link: https://project-hobbit.eu/challenges/oke2018-challenge-eswc-2018

 

2. The Mighty Storage Challenge II. Triple stores are the backbone of most applications based on Linked Data. Hence, devising systems that achieve acceptable performance on real datasets and real loads is of central importance for the practical applicability of Semantic Web technologies. So far, it is only partly known whether current systems have already reached this level of performance. With this challenge, we aim to (1) provide objective measures of how well current systems (including 3 commercial systems, which have already expressed their desire to participate) perform on real tasks of industrial relevance and (2) detect bottlenecks of existing systems to further their development towards practical usage.
Link: https://project-hobbit.eu/challenges/mighty-storage-challenge2018/

 

3. ESWC-18 Challenge on Semantic Sentiment Analysis. The development of Web 2.0 has given users important tools and opportunities to create, participate in and populate blogs, review sites, web forums, social networks and online discussions. Tracking emotions and opinions on certain subjects allows identifying users' expectations, feelings, needs, reactions to particular events, political views on certain ideas, etc. Therefore, mining, extracting and understanding opinion data from text residing in online discussions is currently a hot topic for the research community and a key asset for industry. The resulting discussions span a wide range of domains and areas such as commerce, tourism, education, health, etc. Moreover, this content feeds back into Web 2.0 itself, leading to its exponential expansion.
Therefore, the Semantic Sentiment Analysis Challenge looks for systems that can transform unstructured textual information into structured, machine-processable data in any domain by using recent advances in natural language processing, sentiment analysis and the Semantic Web. By relying on large semantic knowledge bases, Semantic Web best practices and techniques, and new lexical resources, semantic sentiment analysis steps away from the blind use of keywords and simple statistical analysis based on syntactic rules, relying instead on the implicit semantic features associated with natural language concepts. Unlike purely syntactic techniques, semantic sentiment analysis approaches are able to detect sentiments that are expressed implicitly within the text, to identify the topics those sentiments refer to, and to achieve higher performance than purely statistical methods.
Link: http://www.maurodragoni.com/research/opinionmining/events/challenge-2018/

 

4. Scalable Question Answering over Linked Data (SQA) Challenge.
Successful approaches to Question Answering are able to scale up to big data volumes, handle a vast number of questions and accelerate the question answering process (e.g. by parallelization), so that the highest possible number of questions can be answered as accurately as possible in the shortest time. The focus of this challenge is on coping with a large volume of questions while returning correct answers for as many of them as possible. We will provide a benchmark of several thousand automatically generated questions. Successful approaches will be able to deal with this vast amount of data and parallelize the answer retrieval process. The task will build on DBpedia 2016-10 as the RDF knowledge base. Participating systems will be evaluated with respect to both the number of correct answers and the time needed.
Link: https://project-hobbit.eu/challenges/sqa-challenge-eswc-2018/
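
To illustrate the kind of task involved, the sketch below answers a single question by issuing a hand-written SPARQL query to a DBpedia endpoint. It is a toy illustration only: the public endpoint, the fixed query and the example question are assumptions made for this sketch, not part of the challenge setup; a participating system would translate arbitrary generated questions into such queries automatically, and at scale.

    # Toy illustration of question answering over DBpedia via SPARQL.
    # Assumption: the public endpoint https://dbpedia.org/sparql is used here;
    # the challenge itself builds on DBpedia 2016-10.
    import json
    import urllib.parse
    import urllib.request

    # Hand-written query for the question "What is the capital of Germany?".
    QUERY = """
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT ?capital WHERE { dbr:Germany dbo:capital ?capital }
    """

    def ask(endpoint, query):
        """Send a SPARQL SELECT query and return the bound values."""
        params = urllib.parse.urlencode(
            {"query": query, "format": "application/sparql-results+json"})
        with urllib.request.urlopen(f"{endpoint}?{params}") as response:
            results = json.load(response)
        return [binding["capital"]["value"]
                for binding in results["results"]["bindings"]]

    if __name__ == "__main__":
        print(ask("https://dbpedia.org/sparql", QUERY))
        # Prints the resource for Berlin, e.g. ['http://dbpedia.org/resource/Berlin']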
