Resources Track

Important Dates


  • Abstract submission: Friday 5th January 2018
  • Paper submission: Friday 12th January 2018
  • Opening of rebuttal period: Friday 16th February
  • Closing of rebuttal period: Wednesday 21st February
  • Notification to authors: Friday 2nd March
  • Camera ready papers due: Friday 23rd March

 

The chairs can be contacted at eswc2018-resource-track-chairs@googlegroups.com

Scope and Topics


Many of the research efforts in the areas of the Semantic Web and Linked Data focus on publishing scientific papers that prove a hypothesis. However, scientific advancement often relies on good-quality resources that provide the necessary scaffolding to support scientific publications; yet the resources themselves rarely get the same recognition as the advances they facilitate. Sharing these resources, and the best practices that have led to their development, with the research community is crucial to consolidate research material, ensure reproducibility of results and, in general, gain new scientific insights.

The ESWC 2018 Resources Track aims to promote the sharing of resources including, but not restricted to: datasets, evaluation benchmarks and annotated corpora, ontologies and vocabularies, knowledge graphs of remarkable interest, machine learning models (e.g. embeddings) that are not trivial to (re-)train, and software frameworks, tools, libraries and APIs attached to an open license. In particular, we encourage the sharing of such resources following best and well-established practices within the Semantic Web community, including the provision of an open license and a permalink identifying the resource. This track calls for contributions that provide a concise and clear description of a resource and its usage.

A typical Resources Track paper may report on one of the following categories, though the list is not exhaustive:
  • Ontologies developed for an application, with a focus on describing the modelling process underlying their creation and their usage;
  • Datasets and annotated corpora produced to support specific evaluation tasks;
  • Knowledge graphs of remarkable interest that comprehensively cover new vertical domains;
  • Machine learning models that would impact the knowledge engineering community. Examples include comprehensive word embeddings trained on large corpora, models for computer vision such as Inception v3 or ResNet50, CRNN for music, etc. See also https://en.wikipedia.org/wiki/List_of_datasets_for_machine_learning_research 
  • Description of a reusable research prototype / service supporting research or applications;
  • Description of community shared software frameworks that can be extended or adapted.
Specific Review Criteria for Resource Papers

The program committee will consider the quality of both the resource and the paper in its review process. Therefore, authors must ensure unfettered access to the resource during the review process, ideally by citing the resource at a permanent, resource-specific location: for example, data available in a repository such as FigShare, Zenodo, or a domain-specific repository, or software code available in a public code repository such as GitHub or Bitbucket. The resource MUST be publicly available.

We welcome the description of well established as well as emerging resources. Resources will be evaluated along the following generic review criteria that should be carefully considered both by authors and reviewers:

Potential impact

  • Does the resource break new ground?
  • Does the resource plug an important gap?
  • How does the resource advance the state of the art?
  • Has the resource been compared to other existing resources (if any) of similar scope?
  • Is the resource of interest to the Semantic Web community?
  • Is the resource of interest to society in general?
  • Will the resource have an impact, especially in supporting the adoption of Semantic Web technologies?
  • Is the resource relevant and sufficiently general, and does it measure some significant aspect?

Reusability

  • Is there evidence of usage by a wider community beyond the resource creators or their project? Alternatively, what is the resource’s potential for being (re)used, for example based on the volume of activity on discussion forums, mailing lists, issue trackers, support portals, etc.?
  • Is the resource easy to (re)use? For example, does it have good-quality documentation? Are tutorials available?
  • Is the resource general enough to be applied in a wider set of scenarios, not just for the originally designed use?
  • Is there potential for extensibility to meet future requirements?
  • Does the resource clearly explain how others can use its data and software?
  • Does the resource description clearly state what the resource can and cannot do, and the rationale for the exclusion of some functionality?

Design & Technical quality

  • Does the design of the resource follow resource specific best practices?
  • Did the authors perform an appropriate re-use or extension of suitable high-quality resources? For example, in the case of ontologies, authors might extend upper ontologies and/or reuse ontology design patterns.
  • Is the resource suitable to solve the task at hand?
  • Does the resource provide an appropriate description (both human and machine readable), thus encouraging the adoption of FAIR principles? Is there a schema diagram? For datasets, is the description available in terms of VoID/DCAT/DublinCore?
  • If the resource proposes performance metrics, are such metrics sufficiently broad and relevant?
  • If the resource is a comparative analysis or replication study, was the coverage of systems reasonable, or were any obvious choices missing?

Availability

  • Is the resource (and related results) published at a persistent URI (PURL, DOI, w3id)?
  • Does the resource provide a licence specification? (See creativecommons.org and opensource.org for more information.)
  • How is the resource publicly available? For example, as an API, as Linked Open Data, as a download, or via an open code repository.
  • Is the resource publicly findable? Is it registered in (community) registries (e.g. Linked Open Vocabularies, BioPortal, or DataHub)? Is it registered in generic repositories such as FigShare, Zenodo or GitHub?
  • Is there a sustainability plan specified for the resource? Is there a plan for the maintenance of the resource?
  • Does it use open standards, when applicable, or have good reason not to?

 

Submissions should follow the rules indicated here.

 
