Paper 26 (Research track)

A Method for Generating Schema Diagrams for OWL Ontologies

Author(s): Cogan Shimizu, Nazifa Karima, Adila Krisnadhi, Pascal Hitzler

Full text: submitted version

Abstract: Interest in the Semantic Web is steadily increasing. In order to support ontology engineers and domain experts, it is necessary to provide them with robust tools that facilitate the engineering process. In many cases, the schema diagram is the single most important tool for quickly conveying the overall purpose of an ontology pattern or module. In this paper, we present a method for programmatically generating a schema diagram from an OWL file. We evaluate its ability to generate schema diagrams similar to existing ones, showing that it outperforms the visualization tools VOWL and OWLGrEd for this purpose. In addition, we provide an implementation of this tool.

Keywords: schema diagram; tool; owl; ontologies

Decision: reject

Review 1 (by Daniel Garijo)

(RELEVANCE TO ESWC) The paper is highly relevant to the conference, and deals with a very timely topic.
(NOVELTY OF THE PROPOSED SOLUTION) Although there are many visualization approaches, the paper proposes a novel solution that relies on the preferences of the authors. It remains unclear whether the proposed solution is really helpful for users (see below).
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The proposed solution seems to be correct according to the authors. There are some decisions which could be confusing, such as removing unions from the diagram. However this is not properly evaluated.
(EVALUATION OF THE STATE-OF-THE-ART) The authors do a good job of overviewing a set of tools for ontology visualization. I would suggest removing RDF Gravity, as it was not really tested or properly described.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) - The evaluation seems to be heavily biased towards the preferences of the authors:
- Most of the vocabularies used in the evaluation are in fact co-authored by one or more of the authors. Therefore, it looks as if the tool was specifically tailored for them. There are many ontology papers with diagrams. In order to have a consistent evaluation, the authors should test their tool on vocabularies that are not their own.
- WebVOWL can simplify contents based on the branching factor of vocabularies. Have the authors compared their tool in these kinds of settings? Also, if I remember correctly, the WebVOWL authors presented at the last K-CAP a new approach which performs several kinds of abstractions on visualizations. Maybe that would be a fair comparison. At the moment, it is even stated in the paper that the comparison with the proposed tool is not fair.
- I don't understand how it is possible for WebVOWL and OWLGrEd to have fewer true positives than SDOnt. One of the reasons why the proposed approach seems useful is that it actually simplifies diagrams. I would expect both WebVOWL and OWLGrEd to have a higher FP rate, as is the case, but not a lower TP rate! Since the evaluation is not available, I wasn't able to check.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) - I have downloaded and tried the tool, and I was very excited, because I have been looking for a tool like this for a while. However, when I went to the download page I found the following:
1) The source code of the tool, despite being stated in the text to be open source, is not available online.
2) The results of the evaluation are not accessible either. The nearest thing I found is a pdf which looks like a scanned copy of a hand-made comparison, very difficult to read.
3) I downloaded and executed the JAR, in both the console and GUI options. I started with my own vocabularies, but due to errors I resorted to the ones provided in the resources. Using the GUI, I could not load any vocabulary: the GUI does not produce anything and does not give any errors, and the save option is not implemented. Regarding the console mode, the tool asks for a name. I tried with folders and files, without success. As a result, I only got an exception.
I could see the diagrams in the resources, and I thought they were very cool. I would like to encourage the authors to improve, release and resubmit their approach to another conference or workshop.
The authors have not answered my concerns. Hence I keep my original score.
This paper introduces a method for generating diagrams in an automated manner from ontologies. 
Besides the points specified above (reproducibility and evaluation), I have little to add. On the bright side, the diagrams of the tool look very nice. However, at the moment SDOnt does not seem mature enough for the ESWC venue. The bias in the evaluation is something that should really be improved, as otherwise the paper looks a little weak.
Furthermore, the authors state that the diagrams are made according to their previous experience, or because they have "found them to be useful". This is a little weak as well, and the publication would be more robust if a user survey were conducted. Replacing statements like unions with two arrows between the classes may lead to confusion.

Review 2 (by Silvio Peroni)

(RELEVANCE TO ESWC) The work described in the paper is very relevant for the conference.
(NOVELTY OF THE PROPOSED SOLUTION) Honestly, the solution proposed is novel, since it tends to reduce the amount of information that is visualised. However, the fact that this is a good thing must be demonstrated formally - by means of a questionnaire with possible users (see below).
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The interface of the tool is very minimal, with only the "open" and "save" functions, of which only the first one is really working. I tried to open an ontology (i.e. PRO), and it did not return anything - no message, no feedback illustrating that it was loading it, nothing. Then I tried with chessgame.owl, provided in the resources available on the software website, and still no visualisation. That is disappointing, since the evaluation of the tool has been run, and it was used according to what the paper is saying. Would it be possible that the JAR uploaded on the tool website was an old version? My configuration was: iMac Retina 4K, macOS High Sierra (10.13.3), 16GB RAM, latest version of Java installed.
(EVALUATION OF THE STATE-OF-THE-ART) The evaluation is fine, even if it strictly focusses only on some visualisation tools, some of them very suited to a particular task - e.g. MEMO GRAPH.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) There are some claims in the paper that should be demonstrated in a more definite and precise way. The main point is that the tool is developed by considering only a limited amount of information contained in the ontology. The authors say that visualising every axiom and relation is undesirable, but this claim has not been explicitly supported by any specific evidence (e.g. by the outcomes of a questionnaire). 
In addition, the authors say that, if a large ontology is considered, any schema diagram becomes essentially unreadable if it gets too large. This is true, but there could be strategies (in particular for dynamic diagrams, which a user can interact with) that could still be able to visualise the main components of such ontologies - e.g. see the approach used in KC-Viz.
The fact that the "visualization for every axiom and relation" is "undesirable for a schema diagram in our opinion" should be discussed in at least a little more detail. While the authors say explicitly that this is their opinion, I'm not sure it is true for every possible situation involving ontology diagrams. For instance, there could be scenarios where having more axioms visualised could be an added value.
Finally, even if a minor point, the fact that the authors did not consider the layout aspect of the generated diagrams at all (which is quite an important aspect in such tools) is a bit limiting - and it makes evident that the tool they present is still not mature enough to be used in real activities.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) To me, this is the main big concern about the whole paper. In terms of reproducibility, the fact that the tool developed by the authors is not working (as mentioned before) is indeed a problem – even if, in principle, one could directly reimplement it by using the algorithm described in the paper.
However, the most significant issue is the test itself, which I've found biased and, honestly, unfair.
First of all, the gold standard of ontology diagrams used comes from papers/ontologies developed by the same authors. Of course, it is reasonable to think that SDOnt follows the same design principles as the diagrams considered, since they share the same authors. Thus, this situation puts the tool in a favorable position with respect to the others. It would have been better to select the diagrams of ontologies developed by people who are not involved in this work.
In addition to that, the other tools involved in the evaluation have been developed with other goals in mind – i.e. covering the maximum set of axioms included in an ontology – as the authors of this paper indeed state: "Our results do not invalidate VOWL or OWLGrEd: They simply serve other purposes". Thus, a comparison like the one proposed is actually biased, since only one tool (the one developed by the authors) complies with the prerequisites the authors themselves wanted to assess.
Under these conditions, it was clear from the beginning that SDOnt would achieve the best performance, which makes the whole evaluation quite useless in the end. In fact, in order to verify whether the contents in the diagram produced by each of the tools involved in such a test are better than the others, the only option would be to directly involve the target users and ask them what they think of the results provided by the tools.
(OVERALL SCORE) In this paper, the authors introduce SDOnt, a tool that is able to show some aspects of an OWL ontology as graphical diagrams, so as to make it possible to discuss them with domain experts. In particular, they present the algorithmic approach for selecting the nodes and edges to create, and they illustrate the evaluation of a test they have done involving two other tools, i.e. VOWL and OWLGrEd.
Some typos:
- no ref: "preferably in the form of rules [?]"
- capital letter after the colon: "reverse the process: We want to"
Strong Points (SPs)
1. the conversion process from OWL to a diagram is an important task
2. interesting method for selecting what to visualise
3. reasonable related works section
Weak Points (WPs)
1. the tool is not working
2. the test is very self-referential
3. users are not involved in the evaluation
As clarified in the comments above, I have several doubts about the evaluation part that totally prevent me from considering this work acceptable for presentation at the conference. I'm really looking forward to hearing from the authors on all the points I've raised.
--- after rebuttal phase
No rebuttal has been provided by the authors. Thus I confirm my score.

Review 3 (by María Poveda-Villalón)

(RELEVANCE TO ESWC) The topic of the paper is relevant for ESWC.
(NOVELTY OF THE PROPOSED SOLUTION) The method seems new; however, there are similar approaches, and it is not that different.
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The method seems complete, but, in my opinion, it addresses a limited view of the problem.
(EVALUATION OF THE STATE-OF-THE-ART) The main related systems are mentioned.
(DEMONSTRATION AND DISCUSSION OF THE PROPERTIES OF THE PROPOSED APPROACH) The evaluation provided is not enough (see overall score).
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The study could be reproduced but the system currently published does not work.
(OVERALL SCORE) This paper presents a system to generate diagrams from ontology files. The topic is definitely interesting and a system like this would be very useful for ontology developers, also it has a great potential to be integrated in existing ontology developing tools and suites.
My main reasons to reject the paper are the weak evaluation provided and the fact that the system does not run, at least for me (please note that I might have some wrong ideas of how the system works, as I haven't been able to check some things on my own; sorry about that if it is the case for some comments). This is actually a pity, as I did love the fact that the link to the system is provided on the very first page.
Regarding the evaluation, the authors make a comparison between the system's output and two other systems' results. My main concern is about the ontologies selected for the reference diagrams. The selection seems quite biased, as of the 11 ontologies or ODPs, 9 have at least one author in common with the presented paper. It seems to me that the system is tailored to a quite narrow view of "most useful diagrams". I completely understand that the authors claim experience with domain experts, draw their diagrams according to their lessons learnt, and then developed the system, IMO, to match their view on usefulness. A fairer evaluation, or at least a way to check how realistic those lessons learnt about visualization are, would be to compile a representative set of diagrams associated with ontologies developed in many contexts. For example, following a methodological approach like in [1] page 4 "Data collection: Two inclusion criteria were defined for the selected papers as follows: (i) the paper should be about ontology, … the paper should describe the process of ontology development... Ontology papers from the JWS were retrieved using the query..." or just looking among those in LOV.
Also, an additional evaluation of how ontology and domain experts perceive the usefulness of the different diagrams would be advisable to provide this broader view.
A less critical point is about the method (steps 3 and 4, and case 2 in step 5). I find this point a bit controversial. Do you somehow differentiate the links between classes when they come from classes' local restrictions vs. when they are defined via domain and range? If yes, is that shown as a legend in the drawing? Otherwise it could lead to a misunderstanding of the actual ontology code. Also, do you differentiate between subClassOf and equivalentClass axioms when used in local restrictions?
My suggestion for the work at this stage would be to submit it to a resources track, where it would be very welcome (once the running issue is solved).
.- Page 4, rule 5. What do the authors refer to with "atomic"? Does it mean "named classes" (this term is used in the second bullet on the same page)?
*Typos*
.- Page 4 ref "[?]" -> it seems that the reference has not been created properly.
.- Page 5 "namly" -> namely
[1] Auriol Degbelo. 2017. A Snapshot of Ontology Evaluation Criteria and Strategies. In Proceedings of the 13th International Conference on Semantic Systems (Semantics2017), Rinke Hoekstra, Catherine Faron-Zucker, Tassilo Pellegrini, and Victor de Boer (Eds.). ACM, New York, NY, USA, 1-8. DOI:

Review 4 (by anonymous reviewer)

(RELEVANCE TO ESWC) The paper presents a method for automatically generating diagrams from OWL files. It is based on the observation that ontologies are often developed by first designing the schema diagram and then creating an OWL file that translates the diagram into OWL axioms, which precisely capture the underlying intention of the possibly ambiguous diagram. This generated OWL file might then be called a "diagram-informed" OWL ontology. In the paper, the authors address the inverse approach, which generates an "OWL-informed" diagram from an OWL file.
The approach consists of several rules that generate a diagram with the aim to balance complexity and understandability. The rules were defined by interactions with domain experts from different fields and follow certain principles (e.g., "subClassOf" relations to "owl:Thing" are omitted, no logical connectives or complex axioms, other than "subClassOf" between named classes, omitting "owl:disjointWith" and "owl:inverseOf" relations, etc.).
Additional rules are provided to break down complex OWL axioms into atomic relations for node-link diagram visualizations (e.g., union, intersection, multiple occurrences of identical object properties, etc.). The generated diagrams are compared against a "gold standard", which consists of diagrams that were published together with the ontologies by the respective ontology authors.
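The kind of rule-based filtering summarized above can be sketched as follows. This is a minimal illustration only: the triple representation, the predicate list, and the example axioms are assumptions for the sketch, not the paper's actual implementation.

```python
# Hypothetical sketch of rule-based axiom filtering for a schema diagram.
# Predicates follow the OWL/RDFS vocabulary; the rule set mirrors the
# principles the review describes (omit subClassOf to owl:Thing, omit
# owl:disjointWith and owl:inverseOf).

OMIT_PREDICATES = {"owl:disjointWith", "owl:inverseOf"}

def keep_for_diagram(triple):
    """Return True if a (subject, predicate, object) triple should
    appear as an edge in the schema diagram."""
    s, p, o = triple
    if p in OMIT_PREDICATES:       # relations the diagram omits entirely
        return False
    if p == "rdfs:subClassOf" and o == "owl:Thing":
        return False               # trivial subclass edges add no information
    return True

axioms = [
    ("ex:Agent", "rdfs:subClassOf", "owl:Thing"),
    ("ex:Person", "rdfs:subClassOf", "ex:Agent"),
    ("ex:Person", "owl:disjointWith", "ex:Organization"),
    ("ex:Person", "ex:memberOf", "ex:Organization"),
]
edges = [t for t in axioms if keep_for_diagram(t)]
# Only the subclass edge to ex:Agent and the memberOf edge survive.
```

In this sketch, complex class expressions (unions, intersections) would be decomposed into several such atomic triples before filtering, as the summary above notes.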
The evaluation comprises a direct comparison against the gold standard for ontology visualizations created with WebVOWL, OWLGrEd, and SDOnt (the implementation of the proposed approach). The results are presented as a table showing the F1 scores for the individual tools. As the criterion for the evaluation, the following method is used: for every node-edge-node artifact in the reference diagram (gold standard), it is checked whether it also occurs in the generated diagram, for each tool individually, and vice versa.
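The evaluation method described above amounts to standard set-based precision/recall/F1 over node-edge-node triples. The sketch below illustrates the computation; the diagram contents are made-up examples, not data from the paper.

```python
def f1_against_gold(gold, generated):
    """Score a generated diagram against a gold-standard diagram,
    both given as sets of (node, edge, node) triples."""
    tp = len(gold & generated)     # artifacts present in both diagrams
    fp = len(generated - gold)     # extra artifacts in the generated diagram
    fn = len(gold - generated)     # gold artifacts the tool missed
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Illustrative diagrams (hypothetical content):
gold = {("Person", "memberOf", "Organization"),
        ("Person", "hasName", "xsd:string")}
generated = {("Person", "memberOf", "Organization"),
             ("Person", "disjointWith", "Organization")}
score = f1_against_gold(gold, generated)  # one TP, one FP, one FN -> 0.5
```

Note that under this metric a tool that visualizes every axiom can only lose precision, never true positives, which is the point Review 1 raises about the reported TP counts.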
(NOVELTY OF THE PROPOSED SOLUTION) Although the paper addresses a relevant problem - the generation of visualizations from OWL files which are balanced in complexity and understandability - this is not a new topic, and the argumentation has some weaknesses. Several ideas are not new and have, for instance, already been addressed in the referenced work on VOWL and WebVOWL, which is very related to the presented approach (e.g., the use of visualizations to spot errors in OWL ontologies, or the idea of color-coding namespaces as in many hand-crafted diagrams).
(CORRECTNESS AND COMPLETENESS OF THE PROPOSED SOLUTION) The evaluation of this paper is well-documented, and all ontologies are provided as OWL files with their corresponding reference diagrams. Additionally, a JAR file with the implementation is provided online. However, I could not test the tool, as it is not clear to me how to use it (a GUI appears, which only contains one "open" and one "save" button; after opening an OWL file nothing happens, and no messages indicate what could be wrong; saving is also not yet supported, a message says).
Unfortunately, some methodological details are missing. It is, for instance, unclear which configuration of WebVOWL has been used in the evaluation: WebVOWL provides several filter controls that allow hiding selected details in the visualization, such as information on class disjointness, on datatype properties, or on intersections and unions of classes, all of which the paper authors considered information that should not be shown.
This makes me doubt that the presented approach would indeed outperform VOWL if WebVOWL were configured appropriately. WebVOWL could be even more powerful in the described setting, as it gives ontology authors many more degrees of freedom, not only allowing them to control what information to show and hide in the ontology diagram but also enabling a flexible arrangement of the nodes in the diagram, without being bound to some predefined and fixed configuration and layout (although the layout of the diagram was explicitly not a subject of investigation here).
(EVALUATION OF THE STATE-OF-THE-ART) The evaluation is a noteworthy example of a much-needed comparison of ontology visualization tools. However, the state of the art is only briefly summarized in Section 2. The selection of related works appears a bit arbitrary and not up to date. The works are only summarized but not further discussed. Exceptions are VOWL and OWLGrEd, which are discussed in more detail and are used as baselines in the evaluation.
(REPRODUCIBILITY AND GENERALITY OF THE EXPERIMENTAL STUDY) The evaluation is well-documented and all ontologies are provided as OWL files with their corresponding reference diagrams.
(OVERALL SCORE) see above and below

Metareview by Christoph Lange

The reviewers agree that this submission addresses a relevant problem; however, there are several serious issues, which justify a rejection:
* The system didn't run for any reviewer who wanted to try it.
* The evaluation is biased because the authors mostly chose ontologies they co-authored.
* The novelty of this work w.r.t. the state of the art is limited.
