Benchmarking and Empirical Evaluation

Description


The Semantic Web and Linked Data have been active areas of research for several years. For the first time, the 15th edition of the Extended Semantic Web Conference runs the Benchmarking and Empirical Evaluation Track. Its goal is to provide a venue for in-depth experimental studies and benchmarks of significant scale, which have normally been considered only as potential submissions to the other regular tracks. The track aims to promote experimental evaluation in Semantic Web/Linked Data research, seeking to establish a research cycle between theory and experiment as in other sciences (e.g., physics).
A typical paper for this track focuses on verifying an existing method by applying it to a specific task and reporting the outcome of an experiment on an established or new dataset.
Papers in this track may fall into any of the following categories:

  • Comparative evaluation studies comparing a spectrum of approaches to a particular problem and, through extensive experiments, providing a comprehensive perspective on the underlying phenomena or approaches.
  • Analyses of experimental results providing insights into the nature or characteristics of the studied phenomena, including negative results.
  • Result verification, focusing on verifying or refuting published results and, through the renewed analysis, helping to advance the state of the art.
  • Benchmarking, focusing on datasets and algorithms for comprehensible and systematic evaluation of existing and future systems.
  • Development of new evaluation methodologies, and their demonstration in an experimental study.

Public availability of experimental datasets is highly encouraged and will be taken into account during evaluation; exceptions will need to be sufficiently justified.
In addition, papers in this track will be judged specifically on their:

  • Precise description of controlled experimental conditions
  • Reproducibility
  • Applicability range (results with broad coverage being preferred over results with narrow applicability)
  • Validity of the evaluation methodology (dataset size, significance tests, etc.)

Special attention will be paid to reproducibility. Experimental settings must therefore be described in enough detail that the results can be independently reproduced, counter-experiments can be designed, and subsequent work can improve on the presented results.
The experimental approach is based on systematic scientific methods used in many disciplines. The page http://explorable.com provides a good collection of methods, experiment design strategies, and resources.

 

Topics:


Topics of interest include, but are not limited to:

  • Management of Semantic Web data and Linked Data
  • Languages, tools, and methodologies for representing and managing Semantic Web data
  • Database, IR, NLP and AI technologies for the Semantic Web
  • Search, query, integration, and analysis on the Semantic Web
  • Robust and scalable knowledge management and reasoning on the Web
  • Cleaning, assurance, and provenance of Semantic Web data, services, and processes
  • Semantic Web Services
  • Semantic Sensor Web
  • Semantic technologies for mobile platforms
  • Evaluation of Semantic Web technologies
  • Ontology engineering and ontology patterns for the Semantic Web
  • Ontology modularity, mapping, merging, and alignment
  • Ontology Dynamics
  • Social and Emergent Semantics
  • Social networks and processes on the Semantic Web
  • Representing and reasoning about trust, privacy, and security
  • User Interfaces to the Semantic Web
  • Interacting with Semantic Web data and Linked Data
  • Information visualization of Semantic Web data and Linked Data
  • Personalized access to Semantic Web data and applications
  • Semantic Web technologies for eGovernment, eEnvironment, eMobility or eHealth
  • Semantic Web and Linked Data for Cloud environments

Acknowledgements
The text of this CfP is partially based on the ISWC 2013 Call for Evaluation Papers by Chris Biemann and Josiane Xavier Parreira.
