Paper 168 (Research track)

Device-Independent Visual Ontology Modeling

Author(s): Vitalis Wiens, Steffen Lohmann, Sören Auer

Abstract: The development of ontologies typically involves ontology engineers and domain experts with different backgrounds in semantic technologies and ontology modeling.
Domain experts, who provide the conceptualization of the knowledge domain, often lack modeling skills and find it hard to follow the logical notation of the OWL representation.
Visualizations of ontologies, in particular graph visualizations in the form of node-link diagrams, are commonly used to support ontology modeling and related tasks.
In order to more directly involve domain experts in ontology modeling, approaches that are immediately available, easy to use, and independent of the device and interaction context are needed.
We present a device-independent approach for visual ontology modeling that lowers the entry barrier to engaging in ontology modeling.
The device independence is achieved through different modes of operation, including mouse and touch interaction, the use of device sensors, and smart interaction techniques such as speech recognition.
Guidance in the ontology modeling and editing process is provided via built-in constraints that conform to the OWL specification.
The results of a comparative user study clearly indicate the benefits of the presented approach.

Keywords: Ontology Engineering; Visual Modeling; Mobile Devices; Touch Interaction; Device-Independence; Visualization; OWL; VOWL; WebVOWL
