Paper 215 (Research track)

A study on SPARQL Endpoint measurement and Recording

Author(s): Zhang Yongjuan, Tao Chen, Dongsheng Wang

Full text: submitted version

Abstract: With the development of the Semantic Web, including semantic technologies and Linked Data, open semantic data sets are emerging in large numbers, and how to organize, evaluate, and pool these data is of great significance for their better use. Evaluations of open data at home and abroad have focused mainly on government data, with few covering other areas; most assess economic, social, and other extended benefits, while few evaluate the data itself. On this basis, this study evaluates the data itself, focusing on three aspects: the basic characteristics of the data, its semantic level, and its use, in order to promote and guide the development and use of semantic data sets.

Keywords: Standard; Evaluation Method; Semantic E Index; Semantic Data Set; SPARQL endpoint; E-RDF

Decision: reject

Review 1 (by Axel Polleres)

(RELEVANCE TO ESWC) The topic is relevant, but the paper does not comply with the established standards for papers at this conference. No clear scientific method nor reproducible evaluation is presented.
(NOVELTY OF THE PROPOSED SOLUTION) The establishment of a dataset scoring function is a good idea per se, but since the approach is not explained in a consumable manner, I cannot credit its novelty.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) I cannot reproduce or assess the scoring, nor how exactly it was evaluated.
(EVALUATION OF THE STATE-OF-THE-ART) There is an OK summary of the state of the art at a high level in this paper, but works that actually analyse and monitor SPARQL endpoints have not been cited, e.g.
Pierre-Yves Vandenbussche, Jürgen Umbrich, Luca Matteis, Aidan Hogan, Carlos Buil Aranda:
SPARQLES: Monitoring public SPARQL endpoints. Semantic Web 8(6): 1049-1065 (2017)
and the references therein would probably be a good starting point for the authors to check.
Also, older works on ranking RDF Linked Data should be considered, such as:
Aidan Hogan, Andreas Harth, Alexandre Passant, Stefan Decker, Axel Polleres:
Weaving the Pedantic Web. LDOW 2010
Also, the authors mention many Semantic Web uptake examples in the introduction, and related work is mentioned in prose in Section 2, but without providing any references. In general, there is a lack of references.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) I cannot identify a clear approach being proposed. 
BTW: E-RDF is a name established in the community for something else, e.g. 
Jos de Bruijn, Stijn Heymans:
Logical Foundations of (e)RDF(S): Complexity and Reasoning. ISWC/ASWC 2007: 86-99
I would suggest using a different acronym, or at least explaining what the E stands for.
Table 1, which seems to contain the gist of what is proposed, is not consumable/readable.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The proposal seems to be based on an earlier publication [3], but that work is not even summarized in its basics to make the present work consumable.
The scoring for the 24 datasets is not really explained: how did you arrive at the scores in detail, who assessed the subscores, and how exactly?
(OVERALL SCORE) The work is not up to the standards of reproducible, scientific work, I am afraid. I strongly advise the authors to explain their scoring model and evaluation in more detail, and to make them available in reproducible form. As mentioned above, building measurements to analyse and rank datasets is in principle a good idea, but the state of the art and existing literature should also be taken into account, especially by referencing it.


Review 2 (by anonymous reviewer)

(RELEVANCE TO ESWC) The paper reports results of an evaluation of some linked datasets provided via SPARQL
endpoints. In this way, it is strongly relevant to ESWC.
(NOVELTY OF THE PROPOSED SOLUTION) Neither the evaluated datasets/endpoints nor the criteria for the evaluation are novel. Also,
the results are not really surprising.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) In this work, the authors propose some criteria for evaluating datasets. However, the paper lacks a clear description of these criteria; e.g. E-RDF and the Semantic-E Index are mentioned as evaluation principles without providing the details. Thus, only Table 1 gives a set of criteria, but without explaining how the points have been obtained.
(EVALUATION OF THE STATE-OF-THE-ART) In the related work section only some open data reports are mentioned without a clear comparison
to the proposed evaluation method.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The paper lacks a clear explanation of the evaluation method. Furthermore, the study is not
really motivated and the results - apart from the observation that China provides only a few 
semantic datasets - are not discussed.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) Because the actual method is not clearly described, the study is not reproducible.
(OVERALL SCORE) The paper presents results of a study about the quality ("maturity") of some linked datasets/endpoints
according to some criteria.
Strong: none, sorry
Weak 1: The paper is poorly written, with many typos and grammar problems, making it difficult to understand the main idea. The paper is not self-contained, lacking a proper explanation of E-RDF and the Semantic E-Index.
Weak 2: Apart from comparing some datasets, the contribution is very small. Furthermore, the datasets are not really comparable, e.g. DBpedia in different languages, IEEE Papers, UniProt. Does it really make sense to compare them?
Weak 3: The evaluation method used in this work is not clearly described.


Review 3 (by Kuldeep Singh)

(RELEVANCE TO ESWC) The research is in the direction of evaluating semantic data with SPARQL endpoints (e.g. DBpedia). Hence, the paper fits in the domain of ESWC.
(NOVELTY OF THE PROPOSED SOLUTION) In the evaluation method, the authors propose a point-based evaluation of semantic data. However, it is unclear why particular evaluation dimensions are assigned higher points and the rest lower points. In Section 3.2.2, there is a table that illustrates this point-based scheme. The table is poorly structured and very hard to follow. I do believe the idea is novel, but it is not well presented in the paper. I advise the authors to be clearer when explaining their idea. Please explain your selection criteria for the metric and why certain points have been included in the evaluation. For example, "Effect performance" is one of the criteria, but its definition is not explained properly; the same holds true for many other parameters in the table.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The methodology the authors follow to evaluate different datasets, assigning a point-wise system to assess the completeness and maturity of the data, is a novel approach. But the approach is incomplete without a proper breakdown of the metric parameters. I could not find a synergy between the proposed approach and the final evaluation section, i.e. in the approach a detailed breakdown of the assigned points is given, but in the evaluation only the total points obtained are reported.
(EVALUATION OF THE STATE-OF-THE-ART) There is no mention of the state of the art. No proper reference is made to previous work in this direction. The term "semantic data maturity" is not explained.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) In the evaluation section, I expect the authors to put more emphasis on their analysis of each dataset. Why a particular dataset got a higher score, and the justification behind it, would be a very interesting insight. How do the authors define a dataset in terms of its semantic maturity (as per the title): does it relate only to the obtained score, or does it also depend on other parameters, as in a big data maturity model?
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) Without insight into how datasets have been assigned values based on the points mentioned in the approach section, it is very difficult to reproduce the results. Further, there is no link to an open repository where the experimental results have been placed.
(OVERALL SCORE) Summary: The paper tackles the problem of evaluating datasets with SPARQL endpoints based on different parameters defined by the authors. The idea is novel, and I appreciate that the authors trigger a discussion in this direction. The authors evaluate several datasets based on a point-based system (i.e. the proposed approach) and assign each a final score for its semantic maturity.
Strong Points: The authors propose an insightful approach to evaluate different datasets. Each parameter in the approach has a pre-assigned value. The considered datasets are large in number, which also provides a thorough evaluation of the approach.
Weak Points: Although the authors propose a new approach, the approach itself has flaws. It is not clear why a particular parameter is assigned higher points; is there any previous study that performs this kind of analysis, and if so, how does the authors' approach differ from it? The related work and included references are incomplete. In the evaluation section, just a table is provided, with no discussion of why a particular dataset scores higher, etc. The paper needs grammar and spell checking, and the inclusion of references in the proper format.
Questions to Authors: Is there another study that also considers such a point-based evaluation metric for datasets?
In the related work section, the authors have included many names, like the Open Data Readiness Assessment of the World Bank, etc., but no reference has been made to them. For a new reader, these terms are not self-explanatory. Is there any special reason to exclude references from the paper?
Suggestions: 1. Please improve the grammar, spelling, and references. The paper needs more work on formatting in terms of style and spacing. Table 1 needs complete restructuring.
2. Please provide, in the introduction section, the associated reasoning and an explanation of why this work is important.


Review 4 (by anonymous reviewer)

(RELEVANCE TO ESWC) The topic of the quality of RDF datasets is relevant for ESWC.
(NOVELTY OF THE PROPOSED SOLUTION) It is not clear to what extent the solution advances the current state of the art.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) Cannot be assessed based on the content and details in the paper.
(EVALUATION OF THE STATE-OF-THE-ART) The state of the art is ignored; as such, there is no evaluation of it.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The discussion of the properties of the approach is mostly missing. A demo is linked, but it does not really showcase the approach as such.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The experiment cannot be reproduced, since the source code and the versions of the data are not linked.
(OVERALL SCORE) ## Description
**Short description of the problem tackled in the paper, main contributions, and results** 
The paper addresses the evaluation of the semantic maturity of data and proposes E-RDF and a Semantic-E Index to assess semantic degrees.
The general idea of using a quality assessment for RDF datasets is obviously of high relevance, but the authors seem to ignore existing work and do not position their paper against related efforts.
Also, the description of the quality assessment and metrics is very hard to follow, and as such it is hard to evaluate its usefulness in its current state.
IMHO, the current version of the paper requires more work to be considered for acceptance at ESWC. 
I would encourage the authors to continue their interesting research.
As a suggestion, the authors should reorganise the paper and focus on discussing related efforts and on the presentation of their quality assessment and the processing of datasets.
## Strong Points (SPs) 
** Enumerate and explain at least three Strong Points of this work** 
* interesting topic of assessing the quality of RDF datasets
## Weak Points (WPs) 
** Enumerate and explain at least three Weak Points of this work**
* formatting of the paper (title, line spacing, ...)
* related work does not cover major publications about the various quality dimensions of RDF/Linked Data/SPARQL endpoints.
* the metrics and assessment are not clear
## Questions to the Authors (QAs) 
** Enumerate the questions to be answered by the authors during the rebuttal process**
Q1) How do E-RDF and the Semantic-E Index differ from and advance the work "Quality Assessment for Linked Data: A Survey", Semantic Web Journal 2012?
### Related Work
* Jeremy Debattista, Sören Auer, and Christoph Lange. 2016. Luzzu - A Methodology and Framework for Linked Data Quality Assessment. J. Data and Information Quality 8, 1, Article 4 (October 2016), 32 pages. DOI: https://doi.org/10.1145/2992786
* Crowdsourcing Linked Data Quality Assessment, ISWC 2013
* Methodology for Assessment of Linked Data Quality (LDQ04)
* Quality Assessment for Linked Data: A Survey, Semantic Web Journal 2012
* SPARQLES: Monitoring Public SPARQL Endpoints, Semantic Web Journal 2016
* See also the workshop on Linked Data Quality at ESWC
### Minor
* abstract : ".on" -> "on"
* Consistent writing of "semantic web", "linked data" -> capitalise


Metareview by Amrapali Zaveri

The reviewers list serious concerns with the quality of this paper. The authors should consider previous works and clarify the advancements in the state-of-the-art. Additionally, evaluation metrics need to be refined and results should be made reproducible. As such, this paper does not seem of a sufficient level for ESWC2018.

