Paper 105 (Resources track)

A New Approach for Task-oriented Ontology Generation

Author(s): Seiji Koide, Fumihiro Kato, Hideaki Takeda

Full text: submitted version

Abstract: The use of ontology in problem solving applications is one of the best ways achieving the interoperability of information and knowledge among applications. However, it is unlikely that there is a single appropriate ontology that meets the requirements of an individual problem solving task. In fact, an individual ontology is normally the outcome of conceptualization for a single target domain in a fixed perspective, while a practical problem solving task often needs multiple target domains with multiple perspectives. Thus, we need to combine multiple ontologies that differ in target domains and in perspectives for a given problem solving task. In this paper, we propose a new method in which new ontologies are generated from existing Linked Open Data (LOD) sets and ontologies to connect concepts in different ontologies. We have called this approach a task-oriented ontology building approach. We built task-oriented ontologies for a driving guidance system, in which we used DBpedia, LinkedGeoData, and Japanese WordNet as existing knowledge bases, while we newly created a facility ontology, a service ontology, an action target ontology, and an action ontology as glue ontologies. The latter ontologies connected the former knowledge bases, and both of them were utilized for a given task, i.e., discovering driving destinations from the demands of a car driver and passengers.

Keywords: task-oriented; DBpedia; WordNet; LinkedGeoData; facility; service; action; action-target

Decision: reject

Review 1 (by Raghava Mutharaju)

Post rebuttal:
Thank you for the response.
Although the glue ontologies are useful to the authors, they are not general enough to be used by a wider community across several domains. Hence they cannot be considered a resource. If the glue ontologies had been built automatically, that would definitely add a lot of value.
This paper describes an approach named task-oriented ontology building, in which a set of ontologies is built to connect multiple domains and answer queries. A driving guidance system is used as a use case to demonstrate the utility of this approach.
The following is not clear to me.
1) Are the ontologies built manually or generated automatically? If they are generated automatically, then I think the resource should be the software that generates the ontologies, assuming that it is capable of generating ontologies for queries in *any* domain. On the other hand, if the ontologies are manually built, then I don't see why they should be considered a resource that is of interest to the wider community.
2) In this approach, what are the parts that were automated and what are the ones that need to be done manually?
If the ontologies are generated automatically, then 1) how do you know which entity in the existing ontologies the entity in the query should be linked to? For example, Zoo from the query was linked to DBpedia Zoo. Do you search for the entity "Zoo" in all the existing ontologies? 2) How do you determine the relation between two entities in the query? For example, Zoo and See.
More questions/comments:
1) What if the queries from the user are not from the domains covered by the built ontologies (facilities, services, etc.)? What does the system do in those cases?
2) Abstract 2nd line, "of" should be present after "... best ways"
3) Introduction 1st line, "coming" can be removed
4) Introduction end of 1st paragraph, "of" should be present after "... and the usefulness"
5) Introduction 2nd paragraph, rewrite the line "... data sets have already been opened ..."
6) End of page 2, adjust the URL - it goes beyond the right margin
7) Towards the end of page 3, the word "taxonomy" seems misused in the sentence "... taxonomy of the ontology ..."
8) Towards the end of page 4, the phrase "from the background" is confusing.
9) Page 9, Table 1, what do the legend symbols (triangle and circle) mean?
10) Page 9, Section 3.3, statistics of the datasets were given at the end. What does "rectangular" mean? Why is Hokkaido excluded?
11) Page 10, Section 3.5, what does "within the range expected in use cases" mean? It is mentioned that text is used instead of speech as input for the GPS system. This is not ideal but I guess ok as a starting point of the research. In the pseudocode, what does surface case and deep case mean? What/where are the query templates?
12) Page 11, user is misspelt as "usrs"
13) Page 12, Section 4.2, general description of "common sense reasoning" along with related work is given instead of describing how and where in your application common sense reasoning was used. In the current state, it is hard to understand the role that common sense reasoning plays in your application. The same holds true for Section 4.3.
Review criteria
1) Potential impact: The ontologies are domain specific and it is hard to see why they are different from other ontologies. So I am not sure if they would be of interest to the wider Semantic Web community.
2) Reusability: It is not specified in the paper if the ontologies are in use by people other than the ones who worked on this project. 
3) Availability: The resource is available on GitHub, but a license is not given. A plan to maintain the resource is also not provided.
Raghava Mutharaju

Review 2 (by Michel Dumontier)

This paper presents a methodological overview of a question answering system for a specific use case: real-time suggestion of places to visit for the passenger of a vehicle, based on their location and submitted requests in natural language. The authors focus on prominent, language-specific (Japanese) linked-data datasets to answer the queries. They encountered gaps in the terminology of the datasets they use, which should be filled in order to address the passenger recommendation system use case. The authors attempt to fill these gaps by developing new "glue" ontologies to bridge the terminologies and datasets they use, for the given use case.
I think the authors use their use case well as a vehicle to highlight common and important unresolved problems in closed-domain question answering. These are: a) how to easily identify and obtain relevant datasets for answering the questions, and b) if you do manage to find relevant datasets, how to easily link these (when they have heterogeneous or incompatible vocabulary). The work might make a good poster to advertise their experience and findings with addressing this task. However, I am doubtful that the contribution is substantial enough to include as a full paper in the Resources track.
The presentation of the paper could be improved a lot. There are numerous inconsistencies and shortcomings with the writing. The explanations are generally obscure and require re-reading to grasp what is being said. Part of the trouble is that much of the details of the methodology (especially of the "glue" ontology design) are omitted. Only an overview is given of the approach. On the other hand, a large portion of the paper explains related work and discussions. I would have rather shortened these sections by forwarding the reader to appropriate references, and given more details of the methodology that they used. I would especially like to know how they converted their natural language queries to SPARQL and how the queries were processed and answered.
Since this is a Resources track submission, there should be some reusable or educational output artefact coming out of this paper that may be useful to the community. However, I don't immediately see what that is. Is it the task, facility, and service "glue" ontologies? I don't immediately see how these could be easily reused in other scenarios.
For the above-mentioned reasons, I don't feel the contributions of the paper are substantial enough to be accepted as a full paper in the Resources track. I would not mind, however, if it were approved as a poster (but then the presentation of the paper should also be improved).

Review 3 (by Fiona McNeill)

The paper describes a 'glue' ontology that the authors have developed in order to allow them to answer natural language questions in the context of automatically suggesting places a user might want to visit.  This ontology allows them to integrate information from various sources such as Japanese WordNet.
There are two major problems with this paper.
Firstly, there is next to no information about the resource.  There's no link to it, no discussion of how it implements, for example, FAIR data principles, no mention of licence and reuse, no discussion of whether anyone is using it, no discussion about its quality and whether they've done any evaluation on it and no real comparison to other people's work.  I feel that this means that the paper is unsuitable for publication in its current form.
Secondly, it is quite confusing to read.  There is a large amount of very general background which does not belong in a conference paper - these things should be discussed briefly with a reference given, so that the reader can be assured you understand what you are building on, rather than discussed over pages.  The authors make several sweeping statements that they don't back up - e.g., 'there are very few practical problem solving task with ontology [sic] at present', 'there has been no remarkable progress on reasoning technology compare [sic] to the era of expert systems' (it is not even clear what this means), 'the role of verbs has not seriously been regarded in action planning thus far' (this is justified with two citations from the 70s).  The methodology is questionable - e.g., they seem to be putting all of their resources (DBpedia, etc.) into a single RDF store but then cannot add all the sources because of capacity issues (at least, I think this is what they are saying they did).  There are many places where it is unclear what they have actually done and what they are suggesting should be done at some point.  There is clearly a lot which is not yet implemented, but no clear discussion of how this will be addressed in the future.
In summary, the authors need to be a lot clearer about what their contribution is and, if they want to be accepted in a resources track, they need to take the principles of Semantic Web resources seriously.

Review 4 (by anonymous reviewer)

This paper describes an approach to generating an ontology over existing linked data resources to support a number of tasks. In this paper the tasks were related to providing answers to queries for a driving guidance system. The approach brings together a number of techniques and the authors report some positive results but overall I didn't learn much from this paper and there is little detail on how the system was evaluated.
As this is a resource track paper there should be evidence of re-use and links to the running system and/or source code. I couldn't find any of these to test and evaluate the system for myself. Given the criteria outlined for this track, this paper is not acceptable.
